FreeNAS Samba Transfer speed not as fast as OpenMediaVault

Hi,

I’ve been running my newly built FreeNAS server for about a month, copying files to it and using it as a media server for my video editing team. Recently I had to copy a project folder from an external drive onto the FreeNAS (an 8-drive vdev in RAIDZ2), and 5GB (small and large files combined) took 1min 40sec. During the transfer the estimated time remaining jumped from under 1min to 10min, and at one point a 60MB chunk of that copy took forever.

I then had to copy the same project folder to another server running OpenMediaVault with a RAID 1 array. It took only 30sec. The exact same folder. What gives?

Is there a way to get FreeNAS to copy as fast as OMV? Do I have to set up a cache, or are there other settings I should change on the FreeNAS?

Any suggestions appreciated

There are a lot of moving parts here. The two big factors are the filesystem and the networking. OpenMediaVault is based on Linux, and FreeNAS is based on FreeBSD.

FreeNAS is built on ZFS, a very featureful and powerful filesystem with lots of ways to configure it. We need more information about your hardware and software configuration.

As for networking, make sure the NIC drivers are set up correctly and well supported by the host OS. Realtek drivers are notoriously funky.
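If you want to sanity check that on the FreeNAS box, FreeBSD makes it easy to see which driver got bound to the NIC (a quick sketch; `ix0` is a guess for an Intel 10GbE card, adjust to your interface name):

```
# List PCI devices with their bound drivers; the NIC shows up as e.g. ix0
pciconf -lv | grep -B3 -i network

# Check negotiated link speed and status (ix0 is an assumed interface name)
ifconfig ix0
```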

My FreeNAS is running on
Ryzen 7 2700
ASRock X370 Taichi
64GB RAM
8 WDC RED
Intel X540-T2

The OMV is running on
i7-3770K
Gigabyte Z77-DS3H
16GB RAM
2 WDC RED
Intel X540-T2

Both are connected to a Netgear 708 10G switch.

Forgot to mention: I’m copying from a Samsung T5 SSD on a MacBook Pro 15 with 10GbE through a Thunderbolt 3 adapter.

The cool thing about FreeNAS is all the performance metrics you can watch while you’re using it…

Have you reviewed them during a transfer to see if you spot any issues?

What specifically should I be looking at?

Try AFP to see if it’s a macOS/SMB issue.

Then try tuning FreeBSD for 10GbE per:
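I can’t speak for every guide, but the typical 10GbE tunables look something like this (illustrative values only; set them under System → Tunables in the FreeNAS UI, or in /etc/sysctl.conf on plain FreeBSD):

```
# Illustrative FreeBSD 10GbE sysctl tunables -- values vary by guide/workload
kern.ipc.maxsockbuf=16777216        # allow larger socket buffers
net.inet.tcp.sendbuf_max=16777216   # raise max TCP send buffer
net.inet.tcp.recvbuf_max=16777216   # raise max TCP receive buffer
```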


I will try AFP.

As for the tuning, I already set up the FreeNAS according to this two weeks ago. I’ve also set the dataset to case insensitive. I’ve successfully had 5 video editors working simultaneously with no problems on ProRes footage.

Just trying to sort out this weird bottleneck, since I’ve successfully gotten 1050MB/s transfers on my 10G connections.

This is a thing as well:

Does the processor on either side spike during the transfer?

I tried AFP. It’s faster than SMB, but still not as fast as transferring to the OMV. Also, I cannot open my FCPX library over AFP.

I then tested SMB after turning off packet signing as per your suggestion. Speed is still the same. Using netdata, this is what I got: the file transfer slows down, then picks up again, but is never as fast as the OMV.
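(For reference, the Mac-side signing change lives in /etc/nsmb.conf; a minimal sketch of what I set, using Apple’s documented signing_required option:)

```
# /etc/nsmb.conf on the Mac -- create the file if it doesn't exist
[default]
signing_required=no
```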

Tried using NFS. I don’t get the alerts. It transfers faster than AFP and SMB, and FCPX libraries work. Now I’ve got to figure out how to limit access to certain datasets (on FreeNAS), and whether I can connect to the FreeNAS using the server name instead of the IP address.
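(In case it helps anyone else: connecting by name works once the hostname resolves, e.g. over mDNS. A sketch from the Mac side; freenas.local and the dataset path are placeholders for your setup:)

```
# Finder: Go -> Connect to Server -> nfs://freenas.local/mnt/tank/projects
# Or from Terminal (resvport is often needed when the server wants reserved ports):
sudo mkdir -p /Volumes/projects
sudo mount -t nfs -o resvport freenas.local:/mnt/tank/projects /Volumes/projects
```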

This is expected.

Still by a large margin?

This is expected.

This is expected (I turn this particular alarm off, it is designed for a server that gets a constant stream of predictable traffic).

This is unexpected.

This is expected.


How exactly is your pool configured? Layout, compression, etc…

Maybe try disabling the NIC tuning. Something in there might not be playing nicely with your network and/or the AFP/SMB protocols.

Yes, 1min AFP on FreeNAS vs 30sec SMB on OMV

Noted

1 pool with a single 8-drive RAIDZ2 vdev, default compression, no encryption.

The one from the 45Drives article? I’ll try again.

Would adjusting sync to either off or always on the share/dataset make a difference? I read somewhere that NFS on FreeNAS has this set to always by default. I would’ve thought that would slow down transfers.


IIRC, the default setting just lets the application/protocol handle syncing. I never change it. Forcing it might be appropriate in some VM storage situations.
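If you want to check or change it, it’s a per-dataset property (pool/dataset names here are placeholders):

```
# Show the current sync policy (standard = let the app/protocol decide)
zfs get sync tank/projects

# Valid values are standard, always, disabled; e.g. back to the default:
zfs set sync=standard tank/projects
```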

Do you have a ZIL or L2ARC?

What’s your record size?

I don’t think it would make this much of a difference, but you can turn atime off on the dataset.
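Something like this, if you want to try it (dataset name is a placeholder):

```
# Stop writing an access-time update on every read; harmless for a media share
zfs set atime=off tank/projects
```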

Are you on latest FreeNAS? There was a bug a few versions back that kind of looked like this.

@freqlabs do you have any ideas?

What version, what’s running, jails, VMs, etc?

  1. Save a backup of your config.
  2. Set all the tuning back to defaults.
  3. Check one thing at a time:
  • Network: run iperf3 between the two devices to make sure you’re able to utilize the network to its full capacity.
  • Storage: check zpool status for errors, active scrubs, etc. Find an appropriate-looking fio example and edit it for your needs to check filesystem performance. (Example commands below.)
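For example (pool name, paths, and the IP are placeholders):

```
# Network: run the server side on FreeNAS, then test from the Mac
iperf3 -s                     # on FreeNAS
iperf3 -c 10.0.0.10 -t 30     # on the client; clean 10GbE should show ~9.4 Gbit/s

# Storage: look for errors, degraded disks, or a scrub in progress
zpool status tank

# Filesystem: rough sequential-write test (directory must exist on the pool)
fio --name=seqwrite --rw=write --bs=1M --size=4G --numjobs=1 \
    --directory=/mnt/tank/fiotest --ioengine=posixaio
```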

iperf3 shows 1.09GB/s up and 980MB/s down, so the network isn’t the issue. I’m gonna have to come back to this later; first I’ve got to figure out how to set up PostgreSQL for DaVinci Resolve project collaboration. The web articles I found have me stumped:
Guide - PostgreSQL 11.1 and pgAdmin 4.3 in iocage

I have a 256GB NVMe as L2ARC. I don’t dare use a ZIL (SLOG) with a non-Optane SSD.

Record size is at 128K (the default).

On the latest FreeNAS, 11.2-U5.

A ZIL on a non-Optane SSD would probably cause more harm than good anyway. Might try turning the L2ARC off as well; sometimes freeing up the RAM that L2ARC consumes is a net performance win, although with 64GB, L2ARC is probably beneficial.
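Cache devices can be pulled and re-added live if you want to A/B test it (device name is a placeholder; check zpool status for yours):

```
# The L2ARC device is listed under the "cache" heading
zpool status tank

# Non-destructive: remove to test, re-add later with "zpool add tank cache <dev>"
zpool remove tank nvd0
```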

I use 1M records for large sequential workloads, although it shouldn’t make a huge difference.
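If you want to experiment, it’s one property, though note it only affects files written after the change (dataset name is a placeholder):

```
# Larger records suit big sequential video files
zfs set recordsize=1M tank/projects
zfs get recordsize tank/projects    # verify
```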


Thanks, man. I finally got the performance near par. When I made the dataset, I made it as a Unix dataset but shared it over SMB. I made a new Windows dataset with an SMB share, and now it’s all good. Thanks, everyone, for the help.
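(For anyone hitting the same thing: casesensitivity can only be set when the dataset is created, which is why I had to make a new one. A sketch with placeholder names:)

```
# Check the property on the old dataset
zfs get casesensitivity tank/projects

# Create the replacement with the SMB-friendly setting
zfs create -o casesensitivity=insensitive tank/projects-smb
```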


I am ASTOUNDED that no one pointed out the reason to you …

you’re comparing RAID 6 (that’s essentially what RAIDZ2 is) … to RAID 1!
(calculating parity = possibly slower than a single drive)

– vs –

RAID 1 … Depending on the system, it could theoretically report the copy complete as soon as one drive has the data, with the controller keeping a record so it can bring the other half of the mirror in sync later.

Certainly, when reading data, RAID 1 will definitely be faster, as it (in most cases) tries to provide RAID 0-like performance, where each drive does half the work.

In this case, however, only the complexity of RAID 6 vs RAID 1 need be pointed out to address the ‘apparent mystery in their respective upload speeds’ …

:)

Hi, thanks for getting involved!

Depending on the hardware available, the parity algorithm shouldn’t be the problem.
The benchmark below recorded w=429MB/s for a 6-drive RAIDZ2 and w=317MB/s for 12 drives, so 8 drives should easily sit between them. The OP’s 5GB in 1min 40sec works out to only ~50MB/s, so the pool should manage something like a 5x faster transfer than that.

There’s a bottleneck somewhere, but rather than the parity algorithm, it would more likely be the CPU, if this were a low-powered single- or dual-core device.

For further reading, someone once did a bunch of tests:

https://calomel.org/zfs_raid_speed_capacity.html