One-way-only packet loss on LAN revealed by iperf3

So I got into a discussion about the optimal bitrate to use when streaming games from my PC to my Nvidia Shield TV. Someone suggested running iperf as a UDP throughput test, and strange results followed.

With my PC hosting and the Shield as client (my normal setup), I get these results:

Then flip that, with the Shield hosting:

Any ideas why I get basically no packet loss with the Shield hosting? How can I fix this?

UDP is lossy; if any buffer along the path other than the sending host's own is getting clogged up, there's no way for the sending application to know that packets are being dropped there.

Btw, you can ask the iperf3 client for a test in either direction, as long as you have iperf3 listening on the other side.
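
For example, roughly like this (the 192.168.1.50 address is just a placeholder for whatever the listening box's IP actually is, and 100M is an arbitrary test rate; -R reverses the direction so the server sends and the client receives):

    # on whichever box should listen
    iperf3 -s

    # on the other box: normal direction, client sends UDP to the server
    iperf3 -c 192.168.1.50 -u -b 100M

    # same listener, but reversed: server sends, client receives
    iperf3 -c 192.168.1.50 -u -b 100M -R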

The packet drop patterns smell like wifi trying to figure things out - when you use the term "LAN", do you mean "not over internet", or do you mean "not wifi"?

It’s on 1 gigabit Ethernet.

I’m guessing if you let it run for longer it’s fine? E.g. try with --time 60 … it’s possible it’s just the kernel on the Shield that could do with a tune.
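
Roughly like this (address and bitrate are placeholders again, pick a -b that matches what you actually want to test):

    # 60-second UDP run instead of the default 10 seconds
    iperf3 -c 192.168.1.50 -u -b 100M --time 60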

It’s still pretty bad.

Edit:
OK, so setting the bandwidth to 1G brought packet loss down to 0.57%, which is much better.

But is 4 MB disappearing acceptable? More than half the loss was in the first transfer, so I guess this is OK?

UDP also doesn’t care about / respect ICMP, so if traffic is too congested, shit just gets dropped.


Ok, but why does it only happen in one direction?

Because that’s the side whose kernel is slower to increase the socket buffer size, and/or it might be more CPU starved.

My PC can’t be more starved than a Shield, that’s for sure. How do you suggest I fix it? Is there even a fix?

There’s nothing to fix really, use TCP?

How would I tell GeForce Experience to use TCP?

Whatever Nvidia is using will need to figure out how to deal with dropped packets (or TCP jitter) and adapt to that on its own. Afaik, the Shield is locked down, so you can’t tweak the kernel and/or increase things like net.core.rmem* or wmem* (or however this works on Windows).
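
On a Linux box you do control, raising those limits looks roughly like this (the 8 MB values are just an illustration, not a recommendation):

    # raise the maximum receive/send socket buffer sizes (run as root)
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_max=8388608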

For your bandwidth testing, you can try with TCP, and that will roughly tell you how much bandwidth the Shield can handle as an upper bound. When it’s streaming it’s using the CPU for other things, but knowing how much it can handle, even over TCP, is a good start.
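
E.g. something like this (placeholder address again; without -u, iperf3 defaults to TCP):

    # plain TCP throughput test towards the Shield
    iperf3 -c 192.168.1.50 --time 30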

It’s also possible that different versions of iperf3 behave differently when it comes to setting buffer sizes/socket options or deciding how much/how often to push data through. The 3.1.3 shown in your screenshots is pretty old (circa 2016), so you should try to find a newer build. If you can’t get one for Windows, maybe you can run it under WSL2, or try Ubuntu in Hyper-V and see if the Shield can stream to your VM without packet loss (it could be a broken Windows UDP receive buffer size).
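
Getting a newer iperf3 that way would be roughly this (assuming a recent Windows 10/11 and an Ubuntu distro in WSL2):

    # from an elevated Windows prompt: install WSL2 with Ubuntu
    wsl --install -d Ubuntu

    # inside the Ubuntu shell: install the distro's iperf3 and check its version
    sudo apt update && sudo apt install -y iperf3
    iperf3 --version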

Additionally, you’ll be streaming video to the Shield, not from it; those two things can load the CPU differently.

I have run a regular speed test before, and the max I got between the Shield and the PC was 970-ish Mbit/s. It dropped to 945 with my new router.

I do see my iperf is from 2016, but it was the latest on their official page. As far as I can tell, newer builds are only for Linux.

And yeah, streaming to the Shield is why I’m worried about the packet loss; testing the other way was just an experiment, with a surprising outcome.
Maybe this is due to differing versions. Unfortunately, the one I use on the Shield doesn’t output its version; it’s bundled with an app called Aruba Utilities.