In short, I have a problem getting 10Gb working between my computer and my Synology NAS. I put this in Networking because I'm guessing it might be something to do with configuring the network devices correctly in Windows/Synology?
A lengthy breakdown of the environment, with lots of information, is here:
I got some suggestions there, but unfortunately no solution. I'm hoping we have more Wendell clones here who know this sort of stuff.
What I haven't added there is that with files in the GB+ range, the speed can be 300-600 MB/s for the first 2-3 seconds, then it drops to crap: a few tens of MB/s, down to single digits, even 0 for a brief moment. Then, just before the end, it climbs back up to 300-600 MB/s for another 2-3 seconds… wtf?
Because the very start of the transfer is fine, files in the hundreds of MBs copy instantly.
I have some concerns: the drives are 5400 rpm, so that may be an issue.
But to eliminate that, reconfigure the SSD cache as a share, try benchmarking from that, and see what happens.
You said Gb works fine though, so my next idea is that maybe the NAS needs more memory.
Also, you may need good shielded Cat 6a cable… it could be that the Cat 6 cable you're using is of inferior quality.
So get some good Cat 6a and retry; also consider Cat 7, just to eliminate the noise possibility altogether.
This shouldn't be a factor, but maybe try changing the PCIe slot the network card is in.
And last, but maybe the easiest: the Amazon review page for the card has a suggestion about changing the driver to
Aquantia AQtion Network Adapter Drivers Version 126.96.36.199 WHQL
WD Reds are 5400 rpm, Red Pros are 7200. I have Red Pros.
The cabling is all shielded, and so is the patch panel. I used another cable, also Cat 6, and removed the patch panel from the equation, as suggested by people on the Synology Community forum… straight to the switch, and got no better results.
Yes, it's very weird: Gb is steady, with no steep dips or spikes, whereas 10Gb starts out sort of fast, then drops into 100Mb territory, and then suddenly at the end it may jump up for a very brief moment.
I tried what the dude in the Amazon review did, and it's still the same. I don't understand it: whatever I do, the results are always exactly the same, with no deviation. It starts kind of fast, then slows down at the same point each time.
Does anyone have in-depth knowledge of the Threadripper platform? I don't. I have an ASUS Zenith Extreme with a 2950X on it. Maybe there's something in UEFI I have to tweak?
The hardware used is nothing like mine, except the NAS itself.
Although it’s very odd that they chose to use Intel’s 10Gb NICs, instead of their own.
I'll try what chaos4u suggested: I blew away the read/write SSD cache and I'm turning it into a volume; I'll put a share on it and see what happens.
Update: The same, no difference between two SSDs in RAID 0 and a bunch of HDDs in RAID 6.
Do you people know what would be best in terms of availability AND performance, Btrfs or ext4? All my volumes are currently ext4.
Update: Doesn’t currently seem to change anything in terms of speed.
I found a way to test I/O performance on Linux and tried this on the HDD volume:
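For reference, a common way to run such a test over SSH is a sequential `dd` pass; this is an illustrative sketch (the file path and size below are placeholders, not necessarily the exact command used):

```shell
# Sequential write/read test on the NAS itself, bypassing the network entirely.
# Point TESTFILE at the volume under test, e.g. a path under /volume1/
# (placeholder); it defaults to /tmp here so the sketch runs anywhere.
TESTFILE=${TESTFILE:-/tmp/ddtest.bin}

# Write 256 MiB and force it to disk, so the page cache doesn't flatter the
# number (use a larger count for a more realistic run):
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync

# Read it back:
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

If the volume itself benchmarks well here but network copies still collapse, the disks are off the suspect list.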
No, CPU and RAM are totally fine. At least according to the GUI and so far I have no real reason not to believe it.
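Beyond the GUI, the same can be double-checked from an SSH session on the NAS with standard Linux tools (a quick sketch; run it while a transfer is in progress):

```shell
# Snapshot of CPU load and per-process usage:
top -b -n 1 2>/dev/null | head -n 15

# Memory usage (fall back to /proc if free is unavailable):
free -m 2>/dev/null || head -n 3 /proc/meminfo

# Per-interface packet, error, and drop counters -- nonzero errs/drop
# columns on the 10Gb interface would point at the NIC or cabling:
cat /proc/net/dev
```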
I know that Cat 6 has limits when the cable gets longer, but the distances here are so short I didn't think Cat 6a would have any point or justification. The cable from the PC to the patch panel is maybe about 20 meters/65 feet? I haven't really measured it, and I'm terrible at measuring by eye. The cable from the NAS to the patch panel is about a meter/3 feet.
I'd like to test it directly, but it turned out I didn't really know how this works, because I tried it once… and I wasn't able to access the shares. I don't remember if the NAS itself was reachable. I did manually put myself on the same subnet, but yeah… maybe I should try it again.
Can you SSH into the NAS and check utilization, saturation, and errors?
First step should be to eliminate as many variables as possible: direct connect, manual addresses, iperf test. Check top for user/system/interrupt/idle stats (and whatever the equivalent is on Windows! I know it has performance measurement tools, I just don't know the names offhand).
Check netstat -s for anomalies. Again, I think Windows has a similar tool; it might even be called netstat too.
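A direct-connect sanity check along those lines might look like this. Everything here is a sketch: 10.0.0.2 is a placeholder address for the NAS end of the link, and iperf3 on DSM is assumed to be available via a community package or Docker:

```shell
SERVER=${SERVER:-10.0.0.2}   # placeholder: NAS address on the direct link

if command -v iperf3 >/dev/null 2>&1; then
    # Start "iperf3 -s" on the NAS first, then from the PC side:
    iperf3 -c "$SERVER" --connect-timeout 3000 -t 10      || echo "single-stream test failed"
    iperf3 -c "$SERVER" --connect-timeout 3000 -t 10 -P 4 || echo "4-stream test failed"
    iperf3 -c "$SERVER" --connect-timeout 3000 -t 10 -R   || echo "reverse (NAS -> PC) test failed"
else
    echo "iperf3 not installed"
fi

# Count retransmit/error lines afterwards (Windows also has netstat -s):
netstat -s 2>/dev/null | grep -i -c -E 'retrans|error' || true

# For scale: a perfect 10 Gbit/s link tops out around this many MB/s:
echo "$((10000 / 8)) MB/s line rate, before protocol overhead"
```

If iperf3 alone already oscillates the way the file copies do, the problem is below SMB (NIC, driver, cable); if iperf3 is steady near line rate, the problem is in the file-sharing layer.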
One thing to try could be changing the frame size (going to an 8k/9k/16k MTU)… saying this after seeing your iperf3 test.
[And yes, properly implemented drivers on a reasonably configured system should allow 10Gbps transfers even with the typical 1500 MTU; the oscillations are probably due to something somewhere getting overloaded and behaving stupidly, and that could be anything once we bring the complexity up to the level of network file transfers.]
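One way to sanity-check jumbo frames end-to-end is a don't-fragment ping. A 9000-byte MTU leaves 8972 bytes of ICMP payload; the interface name and address below are placeholders:

```shell
# MTU math: 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 payload bytes
PAYLOAD=$((9000 - 20 - 8))
echo "$PAYLOAD"

# Linux/NAS side (needs root; eth0 and 10.0.0.2 are placeholders):
#   ip link set eth0 mtu 9000
#   ping -M do -s $PAYLOAD -c 3 10.0.0.2     # -M do = set don't-fragment bit
#
# Windows side:
#   ping -f -l 8972 10.0.0.2                 # -f = don't fragment
#
# If any hop (PC NIC, switch, NAS NIC) won't pass 9000-byte frames,
# these pings fail instead of silently fragmenting.
```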
Next thing is to eliminate the switch and just wire straight to the NAS.
If that fails to fix it, then maybe a different 10GbE card?
As far as Threadripper goes, I can't think of anything that would be causing an issue with the card; 16/16/8/8 should be enough PCIe lanes to run that NIC without issue, and I don't believe any of those slots are shared…
The only UEFI settings I can think of are the PCIe gen/version setting and maybe the Above 4G Decoding setting, but that's really reaching.
On Windows, both PowerShell and the GUI show 10Gb. I have also set the speed from Auto to 10Gb in the NIC settings.
In Synology, the GUI shows 10Gb Full Duplex, and ethtool says one of the supported/advertised link modes is 10000baseT/Full.
Thanks a lot dude! And everyone else for helping ofc!
I thought that MTU size and Jumbo Frame Size were the same thing. When I set the Jumbo Frame Size to 9000 on the desktop's ASUS NIC, nothing changed in terms of speed. But that's probably because, as I only now saw, Windows still had the MTU at 1500 (crappy drivers/software?).
When I manually set it to 9000, I'm getting constant 480-520 MB/s speeds.
That's far from ~90-95% of 10Gb, but it's way, way better than what it was yesterday.
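For anyone hitting the same wall: the manual Windows-side MTU change can be done with netsh from an elevated prompt. This is a sketch; "Ethernet" is a placeholder interface name, so check it with the show command first:

```shell
# Windows, elevated command prompt ("Ethernet" is a placeholder name):
#
#   netsh interface ipv4 show subinterfaces
#   netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent
#
# The change takes effect immediately; rerunning "show subinterfaces"
# should then report an MTU of 9000 on that interface.
```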