Issues getting 10Gb to work

Hello

In short, I have a problem getting 10Gb working between my computer and my Synology NAS. I posted this in Networking because I'm guessing it might have something to do with configuring the network devices correctly in Windows/Synology?

A lengthy breakdown of the environment, with lots of information, is here:

https://community.synology.com/forum/7/post/125309

I got some suggestions there, but, unfortunately, no solution. I'm hoping there are more Wendell clones here who know this sort of stuff.

What I haven't mentioned there is that I've noticed that, with files of several GB or more, the speed can be 300-600MB/s for the first 2-3 seconds, then it drops to crap, a few tens of MB/s down to single digits, even 0 for brief moments, and then, for the last 2-3 seconds before the end, it climbs back up to 300-600MB/s… wtf?

Since the very start of the transfer is fine, copying files of a few hundred MB completes instantly.

Sounds like the storage drives aren't fast enough.


Seconded, sounds like write buffers in RAM fill up and the disks can't keep up with the writes.

Possibly a bad RAID config of some sort (unaligned writes maybe, though that should be hard to do these days).

Are you running NVMe drives in your desktop for the test?
Have you tried e.g. CrystalDiskMark or some other benchmarking utility?

I have some concerns: the drives are 5400 RPM, so that may be an issue.

But to eliminate that, reconfigure the SSD cache as a share and try to bench from that, and see what happens.

You said Gb works fine though, so my next idea is that maybe the NAS needs more memory.

Also, you may need a good shielded Cat 6a cable… it could be that the Cat 6 cable you are using is of inferior quality.
So get some good Cat 6a and retry; also consider Cat 7, just to eliminate the noise possibility altogether.

This should not be a factor, but maybe try changing the PCIe slot the network card is in.

And last, but maybe the easiest: the Amazon review page for the card has a suggestion about changing the driver to

Aquantia AQtion Network Adapter Drivers Version 2.1.12.0 WHQL

All drives in the workstation are Samsung NVMe drives. Samsung Magician shows speeds and IOPS way above what I achieve when moving data, so that "should" be OK, in theory.

I have used iperf, which measures raw network throughput without SMB in the middle, and I was only able to get 4Gb/s+. But SMB speeds are way, way below even that.
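For reference, a run along these lines takes SMB and the disks out of the picture entirely (assuming iperf3 is available on both ends; 10.0.0.2 is just a stand-in for the NAS's 10Gb address):

iperf3 -s                        # on the NAS: start the server
iperf3 -c 10.0.0.2 -t 30         # on the desktop: 30-second test towards the NAS
iperf3 -c 10.0.0.2 -t 30 -R      # same test, reversed (NAS -> desktop)
iperf3 -c 10.0.0.2 -t 30 -P 4    # four parallel streams

If a single stream tops out around 4Gb/s but four parallel streams get closer to line rate, that points at a per-connection limit (driver, offloads, a single CPU core) rather than the disks.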

WD Reds are 5400 RPM, Red Pros are 7200 RPM. I have Red Pros.

The cabling is all shielded, and so is the patch panel. I used another cable, also Cat 6, and removed the patch panel from the equation, as suggested by people on the Synology community forum… straight to the switch, and got no better results.

Yes, it's very weird, as Gb is steady with no steep lags or spikes, whereas 10Gb starts sort of fast, then drops into 100Mb territory, and then at the very end it may suddenly jump up for a brief moment.

I tried what the dude in the Amazon review did, and it's still the same. I don't understand: whatever I do, the results are always exactly the same, with no deviation. It starts kind of fast and slows down at the same point every time.

Does anyone have in-depth knowledge of the Threadripper platform? I don't. I have an ASUS Zenith Extreme with a 2950X on it. Maybe there's something in UEFI I have to tweak?

I believe the bottleneck to be on the Synology NAS side.
I don't think it's fast enough for 10Gb transfers.

It's an enterprise NAS, so it should be. These figures seem to indicate that it's capable of far higher reads and writes than I'm getting:

https://www.synology.com/en-us/products/performance#xs

The hardware used there is nothing like mine, except the NAS itself.
Although it's very odd that they chose to use Intel's 10Gb NICs instead of their own. :face_with_raised_eyebrow:

I'll try what chaos4u suggested: I blew away the read/write SSD cache, I'm turning it into a volume, and I'll put a share on it and see what happens.
Update: The same, no difference between two SSDs in RAID 0 and a bunch of HDDs in RAID 6.


Do you people know which would be better in terms of availability AND performance, Btrfs or ext4? All my volumes are currently ext4.
Update: It doesn't seem to change anything in terms of speed.


I found a way to test I/O performance on Linux and tried this on the HDD volume:

dd if=/dev/zero of=/volume7/testfile bs=5G count=1 oflag=dsync

That gives me ~350MB/s, whereas the same on the SSD volume gives me ~690MB/s.
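One caveat with that command: bs=5G count=1 makes dd stage the entire 5GB in memory before writing it out as one synchronous block, which can skew the number (and needs 5GB of free RAM on the NAS). A sketch of a more streaming-like variant, assuming /volume7 is still the HDD volume:

dd if=/dev/zero of=/volume7/testfile bs=1M count=5120 conv=fdatasync   # sync once at the end
rm /volume7/testfile                                                   # clean up the test file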

If it's an SMB share, I think it may be waiting for the writes to complete before moving on, to ensure data integrity. I'm unsure if there is an option in Synology to temporarily disable this for testing.
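The sync-related Samba options would be the ones to look at; a quick way to see whether any are set on the NAS (the config path is a guess based on where recent DSM versions keep it):

grep -i -E 'sync|aio' /etc/samba/smb.conf   # look for strict sync / sync always / aio settings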

Is RAM or CPU pegged during the 10Gb transfers?

Technically you should try a Cat 6a cable… how long is your Cat 6 run?

Did you run iperf?

Sorry, I reread it and saw that you did.

Were you ever able to test directly without the switch?


No, CPU and RAM are totally fine, at least according to the GUI, and so far I have no real reason not to believe it.

I know that Cat 6 has its limits as the cable gets longer, but the distances here are so short I didn't think Cat 6a would have any point or justification. The cable from the PC to the patch panel is maybe 20 meters/65 feet? I haven't really measured it and I'm terrible at measuring stuff by eye :stuck_out_tongue: The cable from the NAS to the patch panel is about a meter/3 feet.

I'd like to test it directly, but it turned out I didn't really know how that works, because I tried it once… and I wasn't able to access the shares. I don't remember if the NAS itself was reachable. I did manually put myself on the same subnet, but yeah… maybe I should try it again.

Can you SSH into the NAS and check utilization, saturation, and errors?
The first step should be to eliminate as many variables as possible, so: direct connection, manual addresses, iperf test. Check top for user/system/interrupt/idle stats (and whatever the equivalent is on Windows! I know it has performance measurement tools, I just don't know the names offhand).

Check netstat -s for anomalies. Again, I think Windows has a similar tool; it might even be called netstat too.
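A rough sketch of what that looks like over SSH on the NAS, with eth2 standing in for whichever interface the 10Gb card shows up as (exact tools and output vary by DSM version):

top                                             # watch user/system/softirq/iowait/idle during a transfer
ifconfig eth2                                   # RX/TX errors, dropped and overrun counters
netstat -s | grep -i -E 'retrans|drop|error'    # TCP retransmissions and drops
cat /proc/interrupts | grep -i eth2             # whether the NIC's interrupts all land on one core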

Have you checked the airflow over the 10GbE NIC? (I had a similar issue before, where there wasn't enough airflow over the NIC.)

One thing to try could be changing the frame size (going to an 8k/9k/16k MTU)… I'm saying this after seeing your iperf3 test.
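On the Synology side that would be roughly the following over SSH, with eth2 again a placeholder for the 10Gb interface (DSM's Network settings can do the same thing persistently, and every hop in the path, including the switch, has to allow jumbo frames):

ip link set eth2 mtu 9000    # or: ifconfig eth2 mtu 9000
ip link show eth2            # confirm the new MTU actually took

On the Windows side it's the jumbo frame / jumbo packet setting in the adapter's advanced properties.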

[And yes, properly implemented drivers on a reasonably configured system should allow for 10Gbps transfers even with the typical 1500 MTU; the oscillations are probably due to something somewhere getting overloaded and behaving stupidly, and that could be anything once we bring the complexity up to the level of network file transfers.]

Try random things or measure to find where the problem is :thinking:


The next thing is to eliminate the switch and just run a wire straight to the NAS.

If that fails to fix it, then maybe a different 10GbE card?

As far as Threadripper goes, I can't think of anything that would cause an issue with the card; 16/16/8/8 should be plenty of PCIe lanes to run that NIC without issue, and I don't believe any of those slots are shared…

The only UEFI settings I can think of are the PCIe gen/version setting and maybe the Above 4G Decoding setting, but that's really reaching.

Have you verified that the E10G18-T1 is actually running at 10GbE?
Maybe force the ports to 10G.


Set manual 169.254.0.x addresses on the two 10Gb connections. Set the gateway on one end to the IP of the other end.
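On the NAS end the address can be set temporarily over SSH, something like this (eth2 is a stand-in for the 10Gb interface, and DSM's network GUI is the persistent way to do it):

ifconfig eth2 169.254.0.2 netmask 255.255.255.0 up
ping -c 4 169.254.0.1    # assuming the desktop's 10Gb port was given 169.254.0.1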

On Windows, PowerShell and the GUI both show 10Gb. I have also set the speed from Auto to 10Gb in the NIC settings.
In Synology, the GUI shows 10Gb Full Duplex, and ethtool says that one of the supported/advertised link modes is 10000baseT/Full.
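To be sure it's not just advertised but actually negotiated, the Speed line from ethtool is the one to check; something like this on the NAS, with eth2 being whichever interface the E10G18-T1 shows up as:

ethtool eth2 | grep -E 'Speed|Duplex|Auto-negotiation'    # expect Speed: 10000Mb/s, Duplex: Full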

Thanks a lot dude! And everyone else for helping ofc!

I thought the MTU size and the Jumbo Frame Size were the same thing. When I set the Jumbo Frame Size to 9000 on the desktop's ASUS NIC, nothing changed in terms of speed. But that's probably because, as I've now seen, Windows still had the MTU at 1500 (crappy drivers/software?).
When I manually set it to 9000, I'm getting constant 480-520MB/s speeds.
That's far from ~90-95% of 10Gb, but it's way, way better than what I was getting yesterday.
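To confirm the jumbo frames actually survive the whole path, a don't-fragment ping at the jumbo payload size should go through; 8972 bytes is 9000 minus the 28 bytes of IP and ICMP headers (this assumes the NAS's ping supports -M do, and 192.168.1.10 stands in for the desktop's 10Gb address):

ping -M do -s 8972 -c 4 192.168.1.10    # "Frag needed" or 100% loss means some hop is still at 1500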

I have to check what’s bottlenecking now.