
Benchmarking 10G Network Speed

networking
benchmarking

#1

I have 2 computers directly connected to each other using 10gig SFP+ and I wanted to test that I could actually achieve 10 Gigabit speeds. However, I have the feeling that my testing is flawed as I keep running out of CPU after achieving just over 4 Gbps as shown below:

[screenshot: htop output during the transfer]

To perform the test, I am running four instances of the command below in parallel from the other server (to use all 4 threads):

rcp 10GB.img nas.programster.org:/dev/null
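
Concretely, I launch them from bash roughly like this (same file and hostname as above):

for i in 1 2 3 4; do
    rcp 10GB.img nas.programster.org:/dev/null &    # run four copies in the background
done
wait    # block until all four transfers finish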

I created the file with the following command:

dd if=/dev/zero of=10GB.img bs=1M count=10000

I have already checked that both servers are running the link at a 9000 MTU which I thought would have helped.
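
(For reference, checking and setting it with iproute2 looks something like this; enp3s0 is just a placeholder for the actual interface name.)

ip link show dev enp3s0                  # current MTU is shown in the first line of output
sudo ip link set dev enp3s0 mtu 9000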

Is there a way to transfer data over the link whilst requiring less CPU (the reason I’m using rcp instead of scp), or is my processor just not powerful enough, being a Pentium G4560T, which is only a dual core with hyper-threading? I had thought that being fairly new and at 2.9 GHz, it would be pretty efficient at just handling network traffic.


#2

If you just want to measure the throughput you can use iperf.
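
Something like this, assuming iperf is installed on both ends (-s runs the server, -c the client, and -P uses parallel streams):

iperf -s                                  # on the NAS
iperf -c nas.programster.org -P 4         # on the other box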


#3

iPerf is a great tool


#4

Thanks, I’ll check that out :)


#5

Excuse my ignorance, but are you sure the test file can be read faster than 4 Gb/s too? Just in case the bottleneck is on the sending side?
[Edit: I mean, is the img file held on a RAM disk, rather than just in /home?]
[Edit 2: dropped speed by almost an order of magnitude]


#6

Always a good thing to look at. In this case, it’s /dev/zero, so I’d presume it can be read pretty dern quickly.

Other things that can drive up the CPU load with high network load are encryption (not sure if rcp uses it), and sheer packet size, depending on the NIC. I would presume that a proper 10Gb NIC would handle creating the packets, but you never know.
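
If you want to see which offloads the NIC/driver is actually doing, ethtool can list them (interface name is just an example):

ethtool -k enp3s0    # shows tcp-segmentation-offload, generic-receive-offload, etc.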

I’d say run with iperf for right now, that will test to make sure that you can achieve line speed. Then you can do file transfers to find out how much your disk is bottlenecking your wonderful interface.


#7

I used rcp instead of scp on the understanding that it doesn’t use encryption.

I am not reading from /dev/zero directly, but from a file that was generated by reading from it in the first place. However, I don’t think reading from that file is a limiting factor (yet), since the htop output is only showing green bars. If it was waiting on disk, I’m pretty sure it would be showing grey bars, as I have already switched on the “Detailed CPU time” option.

I’ll have a whack with iperf and see what that outputs. Either way though, 4 Gbps is enough to saturate my ZFS RAID10 array of 6 drives, so this is purely a benchmarking exercise rather than any actual need.


#8

I only asked about the file because it looked to me like /dev/zero went into the 10GB.img file, then 10GB.img was rcp’d to the remote host’s /dev/null.
Wasn’t sure if I mis-read, is all…


#9

No, you’re absolutely right, and reading from the file could legitimately have been a bottleneck. However, I’m reading from a pretty fast SSD, and my note about grey bars in htop makes me think it’s not the read speed being a bottleneck just yet. Beyond 5 or 6 gigabit it probably would be, though.
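
If I wanted to check the read side on its own, something like this should do it; iflag=direct bypasses the page cache so it measures the SSD rather than RAM:

dd if=10GB.img of=/dev/null bs=1M iflag=direct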


#10

Cool. Also I now see your post, must have replied while I ham-fistedly struggled to type on my phone…


#11

Just being nosy, but what distro/OS are you using locally and on the NAS?
Looks like you’ve got the networking down almost square (bar tweaking speeds), and there’s growing interest in 10GbE if you felt like writing a blog post on how you set it up?


#12

If iperf confirms that you’re only getting 4Gb/s, then you might need to do some tuning.

45Drives has an article here which shows some kernel tweaks in FreeBSD. I have no idea if/how those would translate to Linux, but it’s a place to start.
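
On Linux, the rough equivalents would be the socket/TCP buffer sysctls; purely as an illustration (the values are examples, not recommendations):

sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"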

I would also check and see if a firmware update is available for your card, and see if there’s a newer driver available from the manufacturer’s site.
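
ethtool will report the driver and firmware version currently in use (interface name is a placeholder):

ethtool -i enp3s0    # prints driver, version, firmware-version, bus-info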

Are you using officially supported transceiver modules? I’ve never seen a module affect performance (either works or it doesn’t in my experience), but you could try swapping them out if nothing else works.
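
If the card/driver supports it, ethtool can also dump the transceiver’s EEPROM, which at least confirms exactly which module is plugged in:

ethtool -m enp3s0    # a.k.a. --dump-module-eeprom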


#13

My NAS build is pretty much like my NAS blog post, except that I bought a NIC from eBay based on advice I got from this forum. Funnily enough, LTT made a video about doing it this way a few days/weeks later, as it is the most cost-effective way to get 10GbE.

Distro: Ubuntu 16.04, because I’m lazy and it’s really easy to set up with ZFS.

SFP+ is a great way to get 10GbE, as I believe it’s optical rather than copper based. However, this does mean that it’s hard/impossible to wire up a building gracefully with it (you can’t just cut it to length like Cat 5/6/7). I just have a loose wire hanging between my KVM and NAS servers, and I use a separate 1 gigabit LAN for internet access.


#14

Technically, you can cut and terminate OM3/LC, although I have never done it.

What modules are you using?


#15

Aren’t the cables for SFP+ a lot thinner than Ethernet? Just loop the excess cable (not too tightly) and Velcro/tape it out of the way?
Unless it’s like DAC runs, which seem a lot thicker.


#16

By DAC, I mean the cable with built-in modules on either end (which could be copper or fibre), and I’m ready to be corrected.
Also, I thought fibre cables are a lot lighter too, for tucking away.


#17

Good to know. I doubt I would feel comfortable enough to do this myself, but happy to pay someone else to wire up my flat one day if they know how. Getting a 10 gig connection between my office and my server would be a huge bonus.

With regards to modules, I’m a bit lost. I am guessing there are ZFS modules? I just installed it using the zfsutils-linux package.


#18

@trooperish, yes, the cables are quite thin and light, and I believe they require a lot less power, which are all huge bonuses. I have a 3 meter cable going between two boxes that are 1 foot apart; I just have to get whatever length is cheap and available on eBay.


#19

Skip putting the file on disk… just dd directly over ssh/rsh.
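
Roughly something like this (ssh shown here; rsh would skip the encryption overhead if it’s set up):

dd if=/dev/zero bs=1M count=10000 | ssh nas.programster.org "dd of=/dev/null bs=1M"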


#20

Ha, sorry, not that kind of module.

Transceiver modules are the things at the end of the fiber optic cable. They look like this:

[photo: SFP+ transceiver module]

Historically, some SFP+ cards were extremely picky about having only officially supported modules. This is less of an issue lately, but it can still crop up. I don’t think the ConnectX-2s are especially picky, and if they were, you wouldn’t get any connection at all. But if you want to eliminate all variables, make sure you use modules officially supported by the card.

Here is the data sheet for the ConnectX-2.

[screenshot from the ConnectX-2 data sheet]

The module you want is MFM1T02A-SR (SR = Short Range).