Ok, i would like to hear everyone’s recommendations on what they think the best file-sharing protocol is, and why?
I have been using SMB, however it really is just too slow to browse any directory/folder with more than 100 items in it.
I largely just use SMB since I have Windows clients at home and my wife uses my NAS for some stuff. I don’t have any issues with my larger directories (I have one with well over 300 separate files and folders). You may need to tweak some settings within your SMB server to improve performance.
For managing files on my servers I use SFTP. Seems to be a lot more responsive and usually works with no problems.
Rsync is pretty solid for syncing files and backups as needed, though I’d probably use Syncthing since it’s more user-friendly out of the box.
When you say tweak settings, which settings are you tweaking?
It can very much depend, for the performance issues. It could be client-side or server-side. It can also depend on what server you’re hosting SMB on; I noticed a performance difference when running SMB shares on Windows Server vs Linux. But some basic stuff like disabling Mac support, encryption, and server signing, and enabling things like the getwd cache, can really help. Formatting the drives to NTFS has also seemed to work really well, though it can cause some issues with Linux-based clients. There are quite a few settings that can be changed within SMB to get the most out of it, though it is hard for me to give a good recommendation as I don’t know what your setup looks like.
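For reference, the kinds of knobs mentioned here live in smb.conf on the server. A hedged sketch (the option names are real Samba parameters, but the values are just a starting point for a trusted LAN, not a tuned recommendation for any particular setup):

```ini
[global]
    # Skip per-packet signing and in-flight encryption.
    # Only sensible on a trusted home LAN.
    server signing = disabled
    smb encrypt = off

    # Cache getwd() results (on by default in modern Samba,
    # but worth confirming).
    getwd cache = yes

    # Leaving "fruit" out of vfs objects disables the macOS
    # compatibility layer, which adds extra metadata traffic.
    vfs objects =
```

After changing these, restart smbd and re-test from a client, since some options only take effect on new connections.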
The protocol is not what limits you; your hardware and configuration are the problem. For Samba, disable atime, which should help a bit, but there are plenty of tuning options. Just describe your setup.
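For concreteness, the atime change happens at the filesystem layer rather than in smb.conf. A sketch with made-up device, mountpoint, and pool names:

```
# ext4/xfs: add noatime to the mount options in /etc/fstab so reads
# no longer trigger a metadata write
/dev/sdb1  /srv/share  ext4  defaults,noatime  0  2

# ZFS: atime is a per-dataset property instead
#   zfs set atime=off tank/share
```

Remount (or reboot) for the fstab change to apply; the ZFS property takes effect immediately.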
I can tell you hardware is not the limiting factor. It may not be SMB, and may be something else. However, locally on the server, ls and du both return near-instantaneous results on large directories. I will have to do some experimentation.
What is the best file-sharing protocol?
To crush your enemies, see them driven before you, and to hear the lamentations of their women.
Oh wait, that’s the answer to a different question. Back to your actual question…
If your client is Windows, I think you’ll find that the real problem isn’t that the server is CIFS or Samba, but that Windows Explorer is incurring too much extra traffic to show you file properties and icons that you’re not interested in.
Kick it old school and go to the Windows Store, and search for Windows File Manager by Microsoft.
What’s the server?
What’s the client?
What’s the goal?
Ok, I did do some testing and found basically what was said here, except with multiple different file browsers on Windows and Linux.
My main OS is Arch Linux and my file browser is Nemo. However, Windows Explorer has the same issues, which is why I originally assumed it was SMB.
I don’t think the issue is with retrieving metadata so much as how the requests are queued and/or made. Via the terminal or command line, SMB appears to operate mostly flawlessly across Windows and Linux.
I also wrote a quick C++ test using std::filesystem and ImGui to access the SMB shares, and my test program did not exhibit any latency or lag getting and displaying the kind of information shown by default in many of these GUI file browsers. This test was done on Linux; I have not compiled it for Windows, because I would have to do that on my work laptop, which I should not be doing.
Surprisingly, Dolphin on Linux was the quickest, although it still got hung up on scrolling. Normally I don’t have Dolphin installed because it comes with many software dependencies, including a file indexer that I have not figured out how to disable. I hate file indexers; they’re always trying to index what’s on my NAS, which is over 20 TB of data currently.
To not have a laggy experience browsing file shares. The problem is amplified when browsing shares that aren’t local to the network I’m on.
That sounds like you have ZFS with the pool metadata on the HDDs. Are you using ZFS?
What are you trying to share, how, and what OS(s) are involved? Because Fibre Channel is about as rock solid as it gets, but it is nowhere near as convenient as SMB.
If you need Windows to talk to Linux, SMB is easy, if not very robust. iSCSI can work, but takes a LOT of effort to get working (unless non-server editions of Windows completely removed iSCSI? I haven’t tried in a while).
If you aren’t trying to watch a movie over SMB from your NAS, but just want the ability to store bulk files, normal unencrypted FTP works great. You can easily hit line speed with it.
SMB and NFS are the two big network filesystems. iSCSI doesn’t really count, because you can only mount a LUN on a single device, so it isn’t shareable among multiple people. Maybe CephFS as well, but not everyone has access to a cluster.
SMB and NFS don’t differ much in performance; which one you choose is often just a matter of features and OS support (NFS support on Windows is limited).
If stuff is slow, it usually (but not always) comes down to relatively high latency and low IOPS to the storage. Browsing directories with thumbnails and the like is metadata-heavy and requires a lot of seeks and bidirectional communication, which is naturally much slower over a network with latency.
SSDs as backing storage, cached metadata (warm up the server cache), “proper” networking with low latency, and misc config tweaks will all help.
See my longer post below. vvvvvvvvv
This is a ZFS pool, but it has an SSD special vdev. Trust me, this is not a hardware limitation; I have experienced this issue with file shares hosted on SSDs.
See my longer post above about my findings.
If you read my post above about the tests, it’s not about SMB or my hardware so much as how some file explorers choose to acquire the data. If you want to test this yourself, I would suggest comparing Nautilus and Dolphin side by side in your largest directory by folder and file count. The two perform very noticeably differently. They also don’t exhibit the same behavior with folders on a local drive.
Side tangent
This is on my list of things to set up and check out, separate from this issue. I have enough drives and PCs; I just haven’t attempted it yet. Is there a guide you recommend for CephFS?
No… you can have multiple initiators connect to a single target. You do need special target software to handle concurrent access, but it is 100% possible to have a single LUN shared via iSCSI to multiple initiators.
I agree. Software can mess up things quite a bit. I usually have Dolphin as I mostly am a KDE guy, so I just see differences in Dolphin behaviour with varying storage setups.
It won’t be faster, I can tell you that much. But CephFS is much more scalable with dedicated metadata servers. The two easy ways to get a test cluster up and running are:
3x Proxmox VMs: install Ceph, set up CephFS, and mount the share. Works like an SMB/NFS export (and even Windows can install Ceph to mount CephFS and RBD)
3x Linux VMs (I run Rocky) with a manual install via CLI. Best method for learning, but more tedious:
Installation (Manual) — Ceph Documentation
3 separate (enterprise) SSDs for the VMs will speed things up, but if it is just for testing and evaluation rather than performance, a couple of good old VM disk files like qcow2 (or better, raw) will do.
P.S.: you can have millions of files in a CephFS directory, and the metadata servers will even load-balance intra-directory metadata to serve potential hot spots in load and demand. Ceph is designed for billions of files and objects; you just have to stack up on metadata servers.
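Once a test cluster from either route is up, mounting CephFS on a client looks roughly like this (monitor IP, user name, and paths are made-up placeholders; your cluster’s values will differ):

```
# Kernel client mount, classic syntax: <mon-ip>:<port>:<path-in-cephfs>
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```

The secretfile holds the CephX key for the named user; from there the mount behaves like any other network filesystem, comparable to an SMB/NFS export.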
Are you sure? Even if it’s not the metadata, the file explorers might still be loading files for previews, although I suspect this behavior should have been disabled in Windows Explorer by default. All I can say is that Dolphin behaves worse than Windows Explorer in a folder with 1000 web videos (NTFS partition, local). It seems to freeze/stutter too until the entire folder view is cached.
Yes, watching both the client and server, I can tell you it has nothing to do with drive/pool read speeds or any network bandwidth issue (note I’m leaving out latency here).
As far as I can tell, this is a request latency issue.
Interesting. Why are you using an NTFS partition with Linux? There are many holes to fall into when accessing an NTFS partition directly from Linux; I’m surprised you haven’t found one.
I still prefer NFS for all my Linux boxes. I tried going to SMB/CIFS on Linux, but there are just way too many POSIX-specific things missing, and permissions/ownership become their own beast on SMB.
NFSv4 encryption is insane to set up (you need an entire Kerberos setup), so what I did instead was set up WireGuard internally in my house. I have DNS set up for the WireGuard IPs, my NAS is the WireGuard server, and all my Linux devices are clients.
Now I get encryption internally + NFS.
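A minimal sketch of what the NAS side of that looks like, assuming a WireGuard subnet of 10.8.0.0/24 and a share at /srv/nas (both made-up values, not from the post):

```
# /etc/exports on the NAS: export only to the WireGuard subnet,
# so NFS is unreachable except through the encrypted tunnel.
/srv/nas  10.8.0.0/24(rw,sync,no_subtree_check)
```

Run `exportfs -ra` after editing, and clients mount via their WireGuard DNS name as usual; the tunnel supplies the encryption that plain NFSv4 would otherwise need Kerberos for.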
Based on my intuition, Dolphin essentially prefetches the first MiB or whatever of every video to generate previews, do file-type detection for “open with”, and whatnot (numbers not real). This can be interpreted as a latency issue too.
Why NTFS: I migrated to Linux recently and haven’t had the time or space to redo my hard drives and partitions. I might still dual-boot in the future, so idk how much I will change. It’s a data-only partition for both systems anyway.