I’ve been running my newly built FreeNAS server for about a month, copying files to it and using it as a media server for my video editing team. Recently I had to copy a project folder from an external drive onto the FreeNAS box (8-drive vdev in RAIDZ2), and 5 GB (small and large files combined) took 1 min 40 sec. During the transfer I noticed the time estimate went from under 1 min to 10 min; it took forever to copy one 60 MB chunk of that job.
I then had to copy the same project folder to another server running OpenMediaVault with RAID 1. It took only 30 sec for the exact same folder. What gives?
Is there a way to make FreeNAS copy as fast as OMV? Do I have to set up a cache, or are there other settings I should change on FreeNAS?
There are a lot of moving parts. The two big factors are the filesystem and the networking. OpenMediaVault is Linux-based and FreeNAS is BSD-based.
FreeNAS uses ZFS, which is a very featureful and powerful filesystem, and there are lots of ways to configure it. We need more information about your hardware and software configuration.
As for networking, make sure the drivers are set up properly and well supported by the OS. Realtek drivers are notoriously funky.
As for tuning, I already set up FreeNAS according to that guide two weeks ago. I’ve also set the dataset to case insensitive. I’ve had 5 video editors working simultaneously on ProRes footage with no problems.
Just trying to sort out this weird bottleneck, since I’ve gotten 1050 MB/s transfers over my 10G connections.
I tried AFP. It’s faster than SMB, but still not as fast as transferring to OMV. Also, I cannot open my FCPX library over AFP.
I then tested SMB with packet signing turned off, as per your suggestion. The speed is still the same. Using netdata, this is what I got: the transfer slows down, then picks up again, but is never as fast as OMV.
Tried NFS. No alerts, it transfers faster than AFP and SMB, and FCPX libraries work. Now I’ve got to figure out how to limit access to certain datasets (on FreeNAS), and whether I can connect to FreeNAS using the server name instead of the IP address.
One pool with a single 8-drive RAIDZ2 vdev, default compression, no encryption.
The one from 45drives article? I’ll try again.
Would adjusting sync to either off or always on the share/dataset make a difference? I read somewhere that NFS on FreeNAS has it set to always by default. I would’ve thought that would slow transfers down.
IIRC, the default setting (standard) just lets the application/protocol decide which writes are synchronous. I never change it. Forcing it to always might be appropriate in some VM storage situations.
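For reference, the sync policy can be inspected and changed per dataset from the FreeNAS shell; tank/media below is a placeholder dataset name:

```shell
# Show the current sync policy (tank/media is a placeholder name)
zfs get sync tank/media

# standard: honor sync requests from clients (the default)
# disabled: acknowledge writes immediately (faster, but in-flight
#           data can be lost on power failure)
# always:   force every write to be synchronous
zfs set sync=standard tank/media
```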
Do you have a SLOG (dedicated ZIL device) or an L2ARC?
What’s your record size?
I don’t think it would make this much of a difference, but you can turn atime off on the dataset.
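If you want to try that, a minimal sketch from the shell (dataset name is a placeholder):

```shell
# Disable access-time updates so reads don't trigger metadata writes
zfs set atime=off tank/media
zfs get atime tank/media   # verify the change
```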
Are you on latest FreeNAS? There was a bug a few versions back that kind of looked like this.
network: check iperf3 between the two devices to make sure you’re able to utilize the network to its full capacity
storage: check zpool status for errors, active scrubs, etc. Find an appropriate-looking fio example and edit it for your needs to check filesystem performance.
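A sketch of those two checks (the server IP, pool path, and test size are placeholders to adjust):

```shell
# --- network ---
# On the FreeNAS box:
iperf3 -s
# On the client, test both directions:
iperf3 -c 192.168.1.50        # client -> server
iperf3 -c 192.168.1.50 -R     # server -> client (reverse)

# --- storage ---
zpool status -v               # look for errors, resilvers, active scrubs
# Sequential 1 MiB writes straight to the pool, bypassing the network
# protocols entirely; point --directory at a dataset on the pool:
fio --name=seqwrite --rw=write --bs=1M --size=8g \
    --directory=/mnt/tank/media --end_fsync=1
```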
iperf3 shows 1.09 GB/s up and 980 MB/s down, so the network isn’t the issue. I’m gonna have to come back to this later; gotta figure out how to set up PostgreSQL for DaVinci Resolve project collaboration. The web articles I found have me stumped: Guide - PostgreSQL 11.1 and pgAdmin 4.3 in iocage
The SLOG is probably causing more harm than good. You might try turning L2ARC off as well; sometimes freeing up the RAM the L2ARC index uses is a net performance win, although with 64 GB, an L2ARC is probably beneficial.
I use 1M records for large sequential workloads, although it shouldn’t make a huge difference.
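For a media dataset holding large files, that would look like this (dataset name is a placeholder; recordsize only affects files written after the change):

```shell
zfs set recordsize=1M tank/media
zfs get recordsize tank/media
```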
Thanks man. I finally got performance near par. When I made the dataset, I created it as a Unix dataset but shared it over SMB. I made a new Windows dataset with an SMB share, and now it’s all good. Thanks for all the help.
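For anyone hitting the same thing: the Windows dataset type sets casesensitivity=insensitive (among other properties), and that property can only be set when a dataset is created, which is why a new dataset was needed. A rough shell equivalent, with a placeholder name:

```shell
# Create a case-insensitive dataset suitable for SMB clients
zfs create -o casesensitivity=insensitive tank/projects
zfs get casesensitivity tank/projects
```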
I am ASTOUNDED that no one pointed out the reason to you …
you’re comparing RAID 6 (that’s essentially what RAIDZ2 is) … to RAID 1 !
(calculating parity = possibly slower than a single drive )
– vs –
RAID 1 … depending on the system, a write could theoretically be reported complete as soon as one drive has the data, with the controller keeping a record that it still needs to mirror it to the other drive later.
Certainly, when reading data RAID 1 will often be faster, as it (in most cases) tries to provide RAID 0-like read performance, where each drive does half the work.
In this case, however, only the complexity of RAID 6 vs RAID 1 need be pointed out to address the ‘apparent mystery’ in their respective upload speeds …
Depending on the hardware available, the parity algorithm shouldn’t be the problem.
Speeds of w=429 MB/s have been recorded for a 6-drive RAIDZ2 and w=317 MB/s for 12 drives, so 8 drives should easily sit between them, which would be something like 5x faster than the transfer OP reported.
There’s a bottleneck somewhere, but rather than the parity algorithm, it’s more likely the CPU, if this is a low-powered single- or dual-core device.
For further reading, someone once did a bunch of tests: