FX-8350 on GA-990FXA: slow networking

I have a situation where one of my computers seems to have a networking transfer speed bottleneck.

The situation:

I have an i5 system that can use Acronis True Image and push an image from an 850 Pro 256 GB to my NAS at almost 900 Mbps.

My other machine, an FX-8350 on a GA-990FXA-UD3 with an Agility 3 boot drive running Windows 8.1, only pushes at about 400 Mbps in this situation. I have also tried simply transferring a huge file and got the same speeds. I tried an Intel EXPI9301CTBLK network adapter, and that did not change the transfer speed. When I watch system resources, I don't see a bottleneck in CPU, disk access speed, or anything else. The machines sit next to each other and have identical 10 ft Cat 6 cables going to the same router.

Where should I look for possible issues here? I am using the fastest SSD available to me to do my best to remove it as a bottleneck.


I had some problems with the Agility SSD in the past. First I had a 64 GB version that kept crashing at boot. A firmware update and it was fine again? No: now the speeds were very spiky, swinging from 150 MB/s to 450 MB/s within minutes. So I replaced it with a Samsung SSD (840) and now the problems are over.

So I would suggest replacing the SSD; in my experience, and that of a friend of mine too, it isn't a good SSD.

How do you measure network speed? Have you updated the BIOS? Updated drivers? Check the motherboard and make sure you have the right driver for the onboard NIC. There's probably a Realtek NIC on board, and some of those are a bit slow.

When I troubleshoot network issues I usually boot some live Linux and run iperf or similar. That can help rule out hardware/firmware issues. Drivers in Windows can be very underperforming: I have a Gigabyte board with an onboard Intel NIC, and the latest Intel driver stutters and causes system freezes; I have to use an older MS-branded driver to get rid of that. It just works in Linux, of course. Probably a firmware problem; Gigabyte has stopped updating the BIOS, though it is possible to update the NIC firmware by modifying the BIOS.
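If the live environment doesn't have iperf handy, a crude equivalent can be improvised with nothing but the standard library. This is a minimal sketch of my own (not iperf, and the chunk/total sizes are arbitrary) that measures TCP throughput over loopback; for a real two-machine test you'd run the sink half on one box and point the client at its address:

```python
import socket
import threading
import time

CHUNK = 64 * 1024            # 64 KiB per send
TOTAL = 64 * 1024 * 1024     # push 64 MiB in total

def sink(server_sock):
    """Accept one connection and drain everything sent to it."""
    conn, _ = server_sock.accept()
    while conn.recv(CHUNK):
        pass
    conn.close()

def measure(host="127.0.0.1"):
    """Return achieved send throughput in MB/s."""
    server = socket.socket()
    server.bind((host, 0))               # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=sink, args=(server,))
    t.start()

    client = socket.socket()
    client.connect((host, port))
    payload = b"\x00" * CHUNK
    start = time.perf_counter()
    sent = 0
    while sent < TOTAL:
        client.sendall(payload)
        sent += len(payload)
    client.close()                       # EOF lets the sink finish
    t.join()
    server.close()
    elapsed = time.perf_counter() - start
    return sent / elapsed / 1e6

if __name__ == "__main__":
    print(f"loopback throughput: {measure():.0f} MB/s")
```

Loopback numbers will be far above gigabit, of course; the point is only to take the disks out of the equation entirely.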


The way I do that is, admittedly rather crudely, an NFS file transfer between two machines with SSDs. Not a good method, but it still showed that the Agility is very inconsistent.

If I run an SMB file transfer off the SSD between my Linux server and my Windows box (also an SSD), I get the full gigabit until the buffers run out, a couple of gigabytes IIRC. If I use RAM drives I get sustained gigabit. Intel NICs on both ends, Samba 4 and Win 8.1.


SMB is in my opinion not a good benchmark because of the overhead and spiky performance. And you have to factor in that a RAM disk is crazy fast, which indicates that the SSD is the problem. So, like I said before, I think the Agility SSD is the problem here.

Samba on Linux can be tuned for very nice performance, but yeah, the disk subsystem is probably a bottleneck. It usually is. I can do some testing later today and provide some data if there is interest.

There definitely is! So please do!

Ehm, a storage bottleneck maybe?
AM3+ is already an old platform, let's not forget about that.

I happen to have a GA-990FXA-UD3 rev 1.0 motherboard with a dual-core Phenom II on it. Ran iperf (the -d flag runs a bidirectional dual test):

:~$ iperf -c 10.42.0.1 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.42.0.1, TCP port 5001
TCP window size:  578 KByte (default)
------------------------------------------------------------
[  5] local 10.42.0.203 port 56984 connected with 10.42.0.1 port 5001
[  4] local 10.42.0.203 port 5001 connected with 10.42.0.1 port 53216
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.05 GBytes   903 Mbits/sec
[  4]  0.0-10.0 sec  1.09 GBytes   937 Mbits/sec

No problem with raw network performance, at least not on Linux. The second machine has an Intel i350 NIC.

A simple download on the giga machine via lighttpd:

:~$ curl -o /dev/null 10.42.0.1:/test.img
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0   112M      0  0:00:09  0:00:09 --:--:--  112M

The giga-machine has a small and very old SSD, a Corsair F60. 10k+ power-on hours :)

Writing to it is no problem though with btrfs:

:~$ curl 10.42.0.1:/test.img -o test.img
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0   112M      0  0:00:09  0:00:09 --:--:--  112M

So everything seems to work just fine. But as I said, on Linux :)
I'll fire up some tests on my Win 8.1 machine later if I remember to.

Not much wrong with the SATA controller on the SB950 chipset though; most issues are fixed or worked around. Older AMD chipsets are much worse.

lspci gives me:

05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)

Kernel module is r8169.

Hmm, those numbers look good on Linux.
That's kind of interesting.

Linux is just awesome like that. Networking has had beast performance on Linux for a long time, and it's only getting better.

Created a 40 GB testfile on the server, the giga-machine has 16 GB RAM:

:~$ curl 10.42.0.1:/40Gtest.img -o 40Gtest.img
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 40.0G  100 40.0G    0     0   112M      0  0:06:05  0:06:05 --:--:--  112M

Happily chugging along :)

Load on the server was like 0.3, the giga-machine hit about 1.8 of load, circa 25% I/O Wait.

Now my Win 8.1 machine is alive again (it's the machine in my specs, onboard Intel NIC). I get full gigabit speeds from my Linux server using Samba 4 with some performance tuning. It's not a domain controller, just a file server.
I get a little more raw performance to my Win 8.1 machine, probably thanks to the Intel NIC; the Linux server steadily sends 116-117 MB a second when I transfer files.

I have Intel and Samsung SSDs.

What kind of router? Some cheaper routers don't support really high throughput.

Edit: Never mind; if they are connected to the same router and one machine can push a lot more data than the other, it should not be the router. Brain fart, sorry. Gamebase is probably right that the Agility SSD is the culprit; try a RAM drive just for testing. There is free RAM drive software for Windows, like ImDisk.
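To make that comparison concrete, here's a quick sketch for measuring sequential write throughput to any directory: point it first at the SSD, then at a RAM drive (an ImDisk volume on Windows, or /dev/shm on Linux), and compare the numbers. The chunk and total sizes are arbitrary choices, and the fsync matters, otherwise you mostly measure the OS write cache:

```python
import os
import tempfile
import time

CHUNK = 4 * 1024 * 1024      # write in 4 MiB chunks

def write_throughput(directory, total=64 * 1024 * 1024):
    """Sequentially write `total` bytes into `directory`, fsync,
    delete the file, and return the achieved throughput in MB/s."""
    payload = b"\x00" * CHUNK
    path = os.path.join(directory, "throughput-test.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total:
            f.write(payload)
            written += len(payload)
        f.flush()
        os.fsync(f.fileno())         # force the data to the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total / elapsed / 1e6

if __name__ == "__main__":
    print(f"{write_throughput(tempfile.gettempdir()):.0f} MB/s")
```

If the RAM drive number is dramatically higher than the SSD number, and roughly matches the network transfer gap, the Agility is a strong suspect.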

The drive I am using as the source drive is the 850 Pro; the Agility just has the OS. So the same 30 GB file from the same SSD to the same NAS shows a 400 Mbps difference in speed. I will boot the FX-8350 from a Linux environment and do a speed test there.

The benchmark I use is the reports in FreeNAS. That way the NAS is consistently telling me what IT'S doing.

I haven't used FreeNAS much, so I don't really know how it works. I did test pushing files from my machine with the GA-990FXA-UD3 motherboard over the integrated NIC to my Linux Samba server. It can push files to the server just about as fast as my Windows machine can.

smbclient on the giga-machine:

:~$ smbclient \\\\10.42.0.1\\widmark ##########
Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.13-Ubuntu]
smb: \> put 1Gtest.img
putting file 1Gtest.img as \1Gtest.img (114410.9 kb/s) (average 114410.9 kb/s)
smb: \>

~114 MB/s, not much difference compared to the Intel Win 8.1 machine.
(No idea why the put command outputs kb/s when it is clearly kilobytes per second; the get command outputs KB/s.)

A thing to try in Windows could be the Realtek LAN driver off Gigabyte's support page. It was actually updated recently (April 2015).

The network tests I tried either in Linux or Windows showed about the same speed. I will investigate drivers further.

I am beginning to think that perhaps the issue is with the PCIe bus? Is there any other way to test the bus speed internally?
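On Linux you can at least read the negotiated PCIe link speed and width out of sysfs, which catches a link that trained slower or narrower than it should have. A small sketch, assuming a reasonably recent kernel (the attributes aren't present on older ones, and the list may come back empty inside containers); `sudo lspci -vv` shows the same information under LnkCap/LnkSta:

```python
import glob
import os

def pcie_links():
    """Collect the negotiated link speed/width for every PCI device
    that exposes one. Returns {device_address: (speed, width)}; the
    dict may be empty if the sysfs attributes aren't available."""
    links = {}
    for dev in glob.glob("/sys/bus/pci/devices/*"):
        try:
            with open(os.path.join(dev, "current_link_speed")) as f:
                speed = f.read().strip()
            with open(os.path.join(dev, "current_link_width")) as f:
                width = f.read().strip()
        except OSError:
            continue      # device has no PCIe link attributes
        links[os.path.basename(dev)] = (speed, width)
    return links

if __name__ == "__main__":
    for addr, (speed, width) in sorted(pcie_links().items()):
        print(f"{addr}: {speed}, x{width}")
```

For the Realtek NIC from the lspci output earlier you'd look at address 05:00.0 and check that the reported speed matches what the slot is capable of.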

Dunno how you test that. It might be a bug in the NIC firmware in the BIOS; upgrading/downgrading the BIOS might change something. Look around on other forums for people with similar problems. Also, what revision of the motherboard are you on? It is printed on the board, down on the left edge by the last PCIe slot. I have my Phenom II CPU at stock 3.4 GHz; pushing files consumes very little CPU though, at least on Linux.

TweakTown has a forum with lots of BIOS tweaking/troubleshooting, might be worth a look/search.

It's rev 3. I will try the beta BIOS and then also try downgrading the BIOS.

Edit: Tried the previous and the next beta BIOS; no change in performance.