Ideas For A High-Density Internet Solution?

I have a college football RV park that needs internet service. There are 800 lots sitting on 0.3 km^2 (74 acres) of land. Eight weekends a year, the population jumps from 50 to 5,000 people.

The owner has tried several WISPs in the past, but they couldn’t handle the load. He’s also talked to local telcos, but they want $240k to $432k per year ($25-45/month x 800 lots) to run cable internet.

Need to think outside the box.

Bandwidth

The bandwidth needs are kind of nutty. Everyone is tailgating, so either no one is using the internet or all 800 lots are trying to watch 3 games at once. I’m figuring they need 20 Gbps.
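Rough math behind that figure (my own back-of-envelope; the ~8 Mbps per HD stream is an assumption):

```python
# Back-of-envelope peak demand. Assumptions: every lot pulls 3 concurrent
# HD streams at ~8 Mbps each (real adaptive-bitrate streams will vary).
lots = 800
streams_per_lot = 3
mbps_per_stream = 8

peak_gbps = lots * streams_per_lot * mbps_per_stream / 1000
print(f"Worst-case demand: ~{peak_gbps:.0f} Gbps")  # ~19 Gbps, call it 20
```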

Auburn University’s campus (and a ton of fiber) is two miles down the road, but two miles in the other direction maxes out at 5 Mbps AT&T DSL. Cable service at the park office is middling at best.

The owner built the RV park over the past 15 years between jobs. He does underground utility construction and has buried a good bit of pipe and conduit in the area.

So he’s totally capable of running his own fiber line into town, but how do you go about finding a wholesale uplink connection?

Distribution

I don’t think a wireless solution would work unless we offload most of the traffic to a wired network. There are about 80 electrical distribution panels in the park and the conduit has enough room to pull CAT5 cables to each lot.

That just leaves finding a way to connect ~80 outdoor switches back to the office. The farthest one is about 600 meters away.
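One constraint to keep in mind: copper Ethernet tops out at 100 m per run, so the lot drops are fine on CAT5 but the longer switch-to-office links will need fiber. A quick sizing sketch (assuming lots are spread evenly across the panels):

```python
# Distribution sizing sketch. Assumption: 800 lots spread evenly over
# the 80 panels; actual per-panel counts will vary.
lots, panels = 800, 80
copper_limit_m = 100   # spec limit for Ethernet over twisted pair
farthest_run_m = 600   # longest switch-to-office run in the park

print(f"{lots // panels} lots per panel switch")  # 10, so 16-port gear fits
print(f"Runs past {copper_limit_m} m need fiber; worst case is {farthest_run_m} m")
```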


No idea about the uplink, although you might ask any contacts you made at the local WISPs; they should know.

But for this:

These might be more economical and it looks like you can even get them with team branding…

Historically (and to this day, unfortunately) the solution for a slow uplink with many downloads of the same data has been caching and traffic shaping. A WiFi mesh network would be an interesting solution, one that can load-balance traffic through different paths to reach the uplink, but you would need really good APs (the WiFi 6 “pro” varieties intended for outdoor use that can hold many concurrent users, which are costly). I haven’t dealt with this big a number of people before (and especially not with everyone being a video streamer).

Also, depending on the area, two miles in any direction could put you in another jurisdiction, so I’m not sure an ISP would be legally allowed to serve you even if you manage to lay fiber straight to their door. Depending on your situation, you’ll need to check your zoning laws and the jurisdiction(s) the fiber would pass through.

Back to the distribution: you could look for cheap second-hand fiber switches to reach the farthest lots and connect the shorter runs with Cat6A (please, if you’re going to lay new wires and it’s not fiber, at least make the interconnects Cat6A; Cat5e from the switches to the RVs should be OK though). But by this point, I think it will be easier and cheaper to set up WiFi 6 APs, even if it’s not a mesh network: four big unidirectional antennas, one in each corner, plus maybe 4 to 8 omnidirectional antennas in the middle to cover the whole area (you’d probably need 3 m towers for them). Connections between the APs and the backbone can be made with fiber, media converters, and PoE injectors. Find APs that have at least 2.5 Gbps Ethernet ports on the back. In theory, if everyone is balanced equally across the APs, you should have (see the sketch after this list):

  • Splitting the people equally among 12 APs: 5000 / 12 = ~416 people per AP.
  • Assuming traffic maxes out the AP’s 2.5 Gbps backport (and that the AP can actually carry that much wireless traffic), that’s 2500 Mbps / 416 = ~6 Mbps per person.
  • 10 Gbps for the backbone should be enough. If we max out all the AP traffic at the same time, we’d have 12 APs × 2500 Mbps = 30000 Mbps, which is 3× 10G; so if you manage to get a 10G fiber uplink, you’d have about 2 Mbps per person with everything maxed out.
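
And the same numbers as a tweakable sketch (same assumptions: perfectly balanced clients, and APs that can actually saturate their wired backports):

```python
# Per-person throughput sketch for the AP plan above. Assumptions:
# clients perfectly balanced across 12 APs, each AP saturating its
# 2.5 GbE backport, and a single 10G fiber uplink.
people = 5000
aps = 12
ap_backport_mbps = 2500
uplink_mbps = 10_000

people_per_ap = people // aps                                    # ~416
print(f"~{people_per_ap} people per AP")
print(f"~{ap_backport_mbps / people_per_ap:.0f} Mbps/person at the AP")    # ~6
print(f"~{uplink_mbps / people:.0f} Mbps/person if the uplink saturates")  # 2
```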

Notice: those are numbers pulled out of my ass, with many assumptions made, and this is an expensive endeavor no matter how you look at it when it comes to distributing the network. You’re basically becoming an ISP if you have to manage 5000 users. But it’s not all bad: in the real world, not everyone uses their bandwidth at the same time, so it should probably work. The problem now is finding those expensive long-range APs that support lots of concurrent users. :man_shrugging: Also, don’t take this as advice; that’s me rambling. Maybe someone with more network architecting knowledge can help (I’m not a network architect… yet).

All of that makes sense. I think it’d be better to focus on the wired portion for now. I’d hate to invest in WiFi 5 at this point, but at the same time there aren’t a lot of WiFi 6 devices yet.

This is a weird edge case, where demand rises and falls in unison. Everyone sets up their tailgate, leaves for the stadium, etc. at the same time. Then when the LSU, Alabama, and Georgia games start, they all stream 3 games simultaneously.

I think the WISP packages failed because they were designed for a sleepy RV park. This is more of a convention center, with roughly 1 cell phone, 0.5 45-inch TVs, and 0.33 laptops per person.

I talked to CommQuotes. They’re asking AT&T if they can put a 10 Gbps drop into the park. I think 20 Gbps is the peak, but streaming services are good about scaling resolution to available bandwidth, so 10 Gbps might be the best way to go.

Just a CS student with little to no field experience:

Why not collect the local hookups into switches with SFP+ uplinks and then run fiber back? Multimode OM5 at 1 Gbit should have a reach of 500 to 550 meters.

From a quick search, outdoor SFP/SFP+ switches exist.
Mikrotik netPower 15FR or Ubiquiti EdgePoint

And a pair of 48-port SFP switches to hand over to the router.
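
Quick port-count check (a sketch; assumes one SFP uplink per outdoor switch, ~80 of them per the OP):

```python
# Head-end port count. Assumption: one SFP uplink per outdoor switch.
import math

outdoor_switches = 80
ports_per_agg_switch = 48

agg = math.ceil(outdoor_switches / ports_per_agg_switch)
spare = agg * ports_per_agg_switch - outdoor_switches
print(f"{agg} x {ports_per_agg_switch}-port SFP switches, {spare} ports spare")
# -> 2 x 48-port switches with 16 ports to spare
```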

Edit: Would it make sense to run this with redundancy, or is having a spare on the shelf enough?


Everyone doing the same thing sounds like an ideal use case for a big fat cache.

If it is 3 streams, I’m the type of person that would use NGINX-RTMP and have a machine that captures the source stream and sends it to the RTMP server. The RTMP server would then stream the feed over the LAN.

That way you save WAN bandwidth by receiving only the 3 source streams, which are then restreamed from a distribution server to everyone on the LAN.


So it could look something like this:
All users will have to be on a /16 LAN.

WAN(Game Stream) —> Stream Computer(RTMP Stream) —> NGINX-RTMP-SERVER+RESTREAMER(Stream Output) —> Everyone on LAN
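
For the Stream Computer hop, a minimal sketch of what the relay could run (assumes ffmpeg is installed and an nginx-rtmp server is already listening on the LAN; the server address, stream keys, and source URLs are all placeholders):

```python
# Pull each game feed once from the WAN and push it to the LAN-side
# RTMP server, so the WAN only carries 3 streams total. Assumes ffmpeg
# on PATH and nginx-rtmp listening; all addresses below are placeholders.
import subprocess

RTMP_SERVER = "rtmp://10.0.0.10/live"       # hypothetical LAN restreamer
SOURCE_FEEDS = {                            # placeholder source URLs
    "game1": "https://example.com/game1.m3u8",
    "game2": "https://example.com/game2.m3u8",
    "game3": "https://example.com/game3.m3u8",
}

procs = [
    subprocess.Popen([
        "ffmpeg",
        "-re", "-i", url,                     # read at native frame rate
        "-c", "copy",                         # relay without re-encoding
        "-f", "flv", f"{RTMP_SERVER}/{key}",  # RTMP needs the flv muxer
    ])
    for key, url in SOURCE_FEEDS.items()
]
for p in procs:
    p.wait()
```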


Is that possible with commercial streaming? Each stream would be encrypted individually…


Depends on the encryption I guess. Usually each segment of the stream is valid for only a certain amount of time, so adding too many hops might cause a bunch of re-transmits or just not work at all.

For video streaming specifically it might be worth looking into something like MediaLive from AWS. It’s more of a software solution than hardware, but it sounds like this might be a variation on how to solve the “first mile” problem when you’re trying to broadcast a livestream over a bad connection.

The video streams would be coming from a variety of commercial streaming services, presumably over https. I don’t see any way to dedupe or cache that content without either getting the streaming services to install an on-site cache (ridiculously unlikely) or running some sort of proxy that broadcasts streams from a single account (probably illegal).


Yeah it’s not working so well in reverse, is it. :stuck_out_tongue_closed_eyes:

You’d need something Zixi-like which can take in multiple points of ingress and spit them out to multiple points of egress, but that doesn’t do much to solve the bandwidth problem.

Caching is off the table. There’s no way I’m streaming licensed content to 5k people.


Too bad you can’t just squid-cache live content; that would mostly deal with the issue without breaking any commercial licenses, because everyone still needs to sign in to their own account to view the content.


Shouldn’t need to do that.

I run an RTMP server in my DMZ (as long as the port is open…) and I can get to it with VLC from my phone over WiFi + NAT.

I think the issue with what I suggested is licensing, which puts it off the table anyway.

How do sports bars/restaurants have 29 TVs playing a mix of all different things?

Most of the time they have a license to rebroadcast that stuff.

The OP would need to get in contact with the networks to be able to do something like that.

Sports bars deal with ‘public performance’ licenses. They plug X TVs of size Y over Z sq ft into a formula, and out pops the license requirement/cost.

Don’t Ruin a Perfect Evening: Get the Appropriate Licenses for Radio and TV in Restaurants and Bars (wbklaw.com)

Caching and streaming would fall under reproducing a copyrighted work. You can’t just call the NCAA and ask for a license; that’s for big networks with legal teams and 500-page contracts.

What if you redirect the stream to your internal RTMP server, which users reach after they log in with their own account?

Caching violates IP law and is off the table.

10 Mbps per device across 1k active devices (around 2k-5k connected overall) over WiFi.

On a 500 m x 500 m area, is that about right?
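
If so, that pencils out to about 10 Gbps of aggregate wireless demand (quick sketch):

```python
# Aggregate demand for the numbers above: 1k active devices at 10 Mbps
# each on a 500 m x 500 m site (~0.25 km^2).
active_devices = 1000
mbps_per_device = 10

aggregate_gbps = active_devices * mbps_per_device / 1000
print(f"~{aggregate_gbps:.0f} Gbps aggregate over the wireless side")  # ~10
```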


As for upstream (or I guess more downstream), you’ll need fiber to somewhere, probably in the direction of the university. You’ll need to hire a company to hook you up, likely costing XXK USD if this is in the US, and you’ll need to get some ISP to provide permanent bandwidth plus some additional bandwidth for events.

For WiFi, you’ll need to start drawing a map; you’ll need a lot of directional antennas, and you’ll need to do lots of testing.

I see you having to deploy anywhere between 10 and 50 of those BaseStation XG access points, or similar Ruckus or Aruba gear: basically, directional WiFi antennas with high sensitivity and the ability to lower power as needed. Some of them will have overlapping channels and will be talking over each other, so you’ll need lots of software tweaking to steer clients toward the best AP (force-disconnect clients from an AP if they can’t maintain high data rates, etc.).

On site, you can just do SFP+ interfaces and fiber to aggregate access point traffic into your core switch.

Don’t forget to test.