TrueNAS Scale - 25GbE setup; questions

Hey Gang,

Our TrueNAS Scale media server has come together:

  • SuperMicro H12SSL-NT
  • 128GB - 8 x 16GB RDIMMs MTA9ASF2G72PZ-3G2E1
  • AMD EPYC 7543P
  • Samsung 980 Pro 1TB qty 2 mirror (VM - Win 10)
  • Lexar NS100 128GB qty 2 mirror (TrueNAS OS)
  • WD Gold 16TB qty 8
  • Intel XXV710-DA2 NIC

The goal is to use this NAS as our primary media storage in a video editing environment (primarily RED R3D files). We want to edit on the NAS i.e., not copy media to the local workstations.

Our 25GbE switch:

A few questions:

  1. How do we tell TrueNAS Scale to use the 25GbE NIC as a priority?

  2. I have configured the NICs on the client workstation running Windows 10 Pro to prioritize the 25GbE NIC

When migrating data to the NAS, it will often use the 10GbE path instead of the 25GbE path, and throughput is what I would consider typical for 10GbE. However, even when I can successfully get the client and the NAS to use the 25GbE path, throughput is still at or below 10GbE speeds (~500MB/s). I would expect >1GB/s. TrueNAS server showing the 25GbE link speed:

[Screenshot 2022-04-29 072437]

With 8 WD Gold drives in the NAS at RAIDZ1 I should easily get GB/s transfers - no? Max sustained throughput on the WD Golds is 262MB/s. With one drive given up to parity, that leaves 7 drives x 262MB/s = 1,834MB/s. Even if you only got 60% of that after overhead, that’s still north of 1GB/s transfer rates.

Help? TIA

Hi,
could you answer the following:

  • Are you using NFS or SMB for File Sharing?
  • What is your Networking Setup/Requirements with Scale? (NICs, VLANs, separation)
  • Did you test the ZFS speed locally on the system with something simple like dd or more advanced like the Phoronix Test Suite?

Samba

Intel XXV710-DA2 NIC per above plus the onboard Broadcom NICs. I’m guessing you are seeking more details than just the NIC HW?

No VLANS

Separation? Not sure what you’re asking.

No. I have wondered about doing a local test - suggestions?


TrueNAS Scale

  • Intel XXV710-DA2 | QNAP Switch | Netgear 10GbE Switch | Router

  • Broadcom | Netgear 10GbE Switch | Router

  • IPMI | Netgear 10GbE Switch | Router

Does that help?


Yes, it does. I was asking whether there’s any point in using multiple NICs for the same task - e.g. because they’re assigned to separate VLANs.

What you should do if you want to only use the 25Gbit NIC for File Sharing:

  • Configure it with a Static IP in the Networking Section
    (Remember to set Route and DNS if you were using DHCP)
  • Go to System Settings → Services → SMB → Edit → Advanced and bind it to that IP (a quick shell check for the bind is sketched below)
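
If you want to sanity-check the bind from the TrueNAS shell afterwards, something along these lines works (192.168.25.10 is just a placeholder for whatever static IP you give the 25GbE NIC):

    # Confirm the 25GbE interface actually carries the static IP
    ip -br addr show

    # Confirm smbd is listening on that IP (port 445) rather than on everything
    ss -tlnp | grep ':445'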

In that case I’d also recommend that you go to System Settings → General and set the Web Interface Address to that of your Broadcom NIC if you can put it on a management VLAN, so you can separate Management and File Sharing.

About your Speed Issues:

  • Again, Go to System Settings → Services → SMB → Edit → Advanced
    → Under Auxiliary parameters, add server multi channel support = yes to be safe
  • What are the Settings on your Dataset(s)?
    → Atime and Deduplication need to be off on your pool and dataset
    → Compression and Encryption should be fine with that beefy CPU, but it wouldn’t hurt to test (a quick shell check is sketched below this list)
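
For reference, checking (and fixing) those from the shell could look like this - the pool/dataset name tank/media is hypothetical, substitute your own:

    # Show the relevant properties on the pool and the dataset
    zfs get atime,dedup,compression,encryption tank tank/media

    # Turn off atime and dedup if they are enabled
    zfs set atime=off tank/media
    zfs set dedup=off tank/media

    # Verify Samba picked up the multichannel auxiliary parameter
    testparm -sv 2>/dev/null | grep -i 'multi channel'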

I’d write to a Dataset, or multiple if you want to test out different settings, using dd and /dev/random as a source.
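
A rough sketch of such a dd run, assuming a hypothetical dataset mounted at /mnt/tank/media. One caveat: /dev/random and /dev/urandom are usually CPU-limited well below the speeds you’re chasing, and /dev/zero compresses away if compression is on, so either test against a scratch dataset with compression=off or pre-generate a random file and copy that.

    # Sequential write: ~32 GiB, flushed at the end so writeback caching doesn't flatter the number
    dd if=/dev/zero of=/mnt/tank/media/ddtest.bin bs=1M count=32768 conv=fsync status=progress

    # Sequential read of the same file. With 128GB of RAM much of it may come back from ARC,
    # so either use a file larger than RAM or treat this as a best-case number.
    dd if=/mnt/tank/media/ddtest.bin of=/dev/null bs=1M status=progress

    rm /mnt/tank/media/ddtest.bin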

Alternatively, pass through the Dataset Host Path to a Linux VM with the Phoronix Test Suite. But keep in mind that you need to test the passed through storage, VM Disk Performance will be misleading.
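
If installing the full Phoronix suite in the VM feels heavy, fio gets you comparable numbers and can be pointed straight at the passed-through mount. A minimal sketch, assuming the dataset shows up at /mnt/media inside the VM:

    # 16 GiB sequential write onto the passed-through storage, fsync'd at the end
    fio --name=seqwrite --directory=/mnt/media --rw=write --bs=1M --size=16G \
        --ioengine=libaio --iodepth=8 --end_fsync=1

    # Matching sequential read
    fio --name=seqread --directory=/mnt/media --rw=read --bs=1M --size=16G \
        --ioengine=libaio --iodepth=8

If those numbers look healthy but SMB is still slow, the problem is in the network/SMB layer rather than the pool.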

Edit: If you want to test SMB on the machine, create a bridge interface with the NIC that you use for file sharing as a member and put a Windows VM on that bridge too.
That would rule out networking as a cause for slow SMB speeds.
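
For what it’s worth, in plain iproute2 terms the bridge amounts to the following (on SCALE you’d create it in the Network section instead so the middleware knows about it; enp65s0f0 and the address are placeholders):

    # Create the bridge and add the file-sharing NIC as a member
    ip link add br0 type bridge
    ip link set enp65s0f0 master br0
    ip link set enp65s0f0 up
    ip link set br0 up

    # The host's IP then lives on the bridge, not on the NIC itself
    ip addr add 192.168.25.10/24 dev br0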

If only that worked! :)

It’s reported as a bug in the current build.

I’ve seen rare occasions where I needed to try a second time, but generally setting up a bridge works fine for me.
It’s how I run my Docker and VM networking right now.

What is a bit tricky is that TrueNAS makes you test and then confirm network changes. So if you change your management IP, you have to switch over quickly enough to confirm the changes before they roll back.

In any case, you could also configure this from the Shell via IPMI.

Should you need any extra guidance, feel free to ask.

What about just going to a single NIC - the Intel 25GbE?

Of course the on-board IPMI would stay, but TrueNAS really doesn’t appear to care about that one. Seems there is a history of problems with multiple NICs on the same subnet with TrueNAS Core.

I adjusted my reply above to reflect that. You’re right that multiple NICs on the same subnet can be a problem; I was assuming something about your setup.

If you want to also have a Bridge:

  • Put one of the 25Gbit NICs, or two of them as an LACP bond (LAGG), as a Bridge Member (the sketch after this list shows the layering in plain terms)
  • Give the Bridge a Static IP Alias for the Host System
  • Add any VMs to that Bridge
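
Just to illustrate the layering (again, build this in the SCALE Network UI rather than by hand; interface names and the address are placeholders):

    # LACP (802.3ad) bond across the two 25GbE ports (members must be down while being enslaved)
    ip link add bond0 type bond mode 802.3ad
    ip link set enp65s0f0 down
    ip link set enp65s0f1 down
    ip link set enp65s0f0 master bond0
    ip link set enp65s0f1 master bond0

    # Bridge on top of the bond; the host's static alias goes on the bridge
    ip link add br0 type bridge
    ip link set bond0 master br0
    ip addr add 192.168.25.10/24 dev br0

    # Bring everything up
    ip link set enp65s0f0 up
    ip link set enp65s0f1 up
    ip link set bond0 up
    ip link set br0 up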

The VM running Windows 10 Pro could easily piggy-back on that NIC assuming the VIRTIO driver exists for that NIC (which I’m guessing it does)?
