LAN party networking

A few friends and I are planning to put on a LAN party with between 50 and 100 people next year.

It’s gonna be a pretty big event for our small town, and I want to make sure everything is a success.

I’ve talked with a local ISP, and it sounds like they would be willing to sponsor us with a 10/10 gigabit network connection (that is sooo awesome).

But we have to provide our own networking gear past the modem.

Any recommendations on networking gear for the LAN? I don’t mind shopping for used/old equipment.

We want everyone to have a cabled connection if possible.

2 Likes

You can pick up a stack of 24/48-port switches for fairly cheap off eBay. If you want a more reputable source then I am a huge fan of Unix Surplus.

You are going to need a heck of a router too. I would go for a purpose-built pfSense box.

Just going to cc @DeusQain because he does this.

2 Likes

I’ve heard of a slightly larger event getting support from FS.com. Maybe worth contacting them?

In theory, you plug everyone in, provide DHCP, and off you go. In reality, you’re probably going to have a few bad eggs (DoS on the network, hogging bandwidth with torrents, general shenanigans). I’d suggest finding someone with experience running a large LAN so you can learn some ways to combat these kinds of issues.
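For the “provide DHCP” part, something as small as dnsmasq on the router box can comfortably serve 100 seats. A minimal sketch, where the interface name and addressing are assumptions:

```
# /etc/dnsmasq.conf -- hypothetical flat /23 giving ~500 leases
interface=lan0                          # LAN-facing interface (assumption)
dhcp-range=10.0.0.50,10.0.1.250,255.255.254.0,12h
dhcp-option=option:router,10.0.0.1      # default gateway
dhcp-option=option:dns-server,10.0.0.1  # point DNS at the router/cache box
```

A /23 leaves headroom for servers, admins, and the inevitable second device per person.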

You want managed switches so you can monitor port usage, shut individual ports off, prevent specific layer-2 attacks, and prevent network loops.
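On most managed switches, the loop and rogue-DHCP protections boil down to a few edge-port settings. A hedged sketch in Cisco IOS-style syntax (Brocade/Mikrotik commands differ, and the VLAN number and interface range here are made up):

```
! Hedged sketch, Cisco IOS-style; not a complete config.
ip dhcp snooping                      ! block rogue DHCP servers
ip dhcp snooping vlan 10
interface range GigabitEthernet1/0/1 - 48
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
 spanning-tree bpduguard enable       ! err-disable the port if a switch/loop appears
 storm-control broadcast level 1.00   ! cap broadcast storms at 1% of line rate
```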

You’ll also want some kind of internet login system so that attendees have to log in for internet access. That way you can monitor internet usage (not just local LAN usage from switch ports), kick users off if they’re doing something bad, and tie IPs to people. In addition, you’ll want a content-inspection system so you can find users abusing the internet (e.g. BitTorrent).

The way we used to do it at my university for the Campus LAN (roughly the same size) was to ensure every attendee was 25 ft or less from the closest switch. You can maybe push that to 30 ft if you have some longer patch cables on hand.

  • Cheap & Used: Quanta LB4Ms (48-port 1G RJ45, 2× SFP+) for access and Quanta LB6Ms (24-port SFP+, 4-port 1G RJ45) as core.

  • Cheap & New: Mikrotik CRS326-24G-2S+IN (24-port 1G RJ45, 2-port SFP+) for access, Mikrotik CRS317-1G-16S+RM (16-port SFP+, 1-port 1G RJ45) as core.


Troubles to avoid:

  • Management ports look very tasty for some reason, and people will try to plug their network cable into console ports. Duct tape is your friend!

  • Cables running on the floor are a tripping hazard. Duct tape works, though it makes the cables sticky. Cable bridges or “tunnel tape” (duct tape with a non-sticky strip running down its center) work wonders.

  • The first few hours will see a lot of download traffic as everyone updates their games (for reference, our LAN moved just over 2 TB over its course, Friday evening to Sunday morning). After that, most games have finished updating.

  • At least two weeks before go-day, have the network fully cabled and configured for a test run.

RADIUS is your friend! FreeRADIUS worked great for us.
The network-security side, such as a firewall that keeps up at 10G speeds, is going to be pricey. DIY will be way cheaper, though it’s a lot more work.
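On the RADIUS point: the simplest possible FreeRADIUS setup is just static entries in its users file. A minimal sketch, where the path, names, and passwords are examples only:

```
# /etc/freeradius/3.0/users (path varies by distro/version)
alice   Cleartext-Password := "changeme1"
bob     Cleartext-Password := "changeme2"

# Reject anyone not listed above
DEFAULT Auth-Type := Reject
```

For a one-off event you could generate one entry per ticket sold, which also gives you the IP-to-person mapping mentioned earlier.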

2 Likes

Three of these:

You can stack them using the 40G ports between them, they have 10G connectivity for the uplinks (and a local download cache if you want), and there’s a thread on STH about how to upgrade them/unlock all functionality (the 10G and 40G ports are license-only in theory, but these switches are EOL and you can just enable them by following the instructions).

40G cabling is 10–15 USD for 2 m twinax, and 40G PCIe cards for servers are 40 USD (Mellanox ConnectX-3).
For the router/firewall I’d suggest VyOS on a bare-metal Supermicro like this one:

with two Xeons and a suitable amount of RAM, plus two 40G cards.
You will be able to route at 40Gbps; firewalling … meh … it will depend on the number of rules, but it should be able to do 10Gbps up and down …
But yes, it’s a buttload of work to make everything run smoothly, but you said you don’t mind used/budget gear :slight_smile:

On the firewall, if you want to go with an appliance, the cheapest option will probably be a Mikrotik CCR1036-8G-2S+, which can push 10Gbps when routing and filtering (barely, if you don’t go heavy on the filters).
Anything else and you’re talking 5K and up (Fortigate, Palo Alto, Check Point).

So your issue with prices is getting density of GbE ports, and 10GbE uplinks.
For ~100 people you’re looking at 2x 48 port switches.
Going on the used market, something like a 3750X can be had for ~$500 CAD, give or take. Check local used shops, as they might be cheaper without shipping. They’re stackable, so they act like one big switch, and they do L3, so you can do everything on them if you like. There are additional security considerations I’m completely ignoring in favor of going cheap and fast.
So $1000CAD would be the price for base infrastructure. If you want servers I’d beg, borrow, and steal.
If you want 10GbE firewalls… good luck.
Edit: don’t forget to factor in cable runs for network and power, and tables and chairs, if those aren’t provided by the event space.

1 Like

I suggest you also plan ahead which games you want to play, and probably make multiple (Steam) cache servers available in case people need to download/update games. 10G isn’t that much when 100 people may try to download stuff at the same time.
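To put numbers on “10G isn’t that much”, a back-of-the-envelope sketch:

```shell
# Rough per-seat share of the 10 Gbit/s uplink if all 100 seats pull at once
uplink_mbps=10000   # 10 Gbit/s expressed in Mbit/s
seats=100
per_seat_mbps=$((uplink_mbps / seats))
echo "${per_seat_mbps} Mbit/s (~$((per_seat_mbps / 8)) MB/s) per seat"
```

That’s roughly one tenth of what a modern SSD-backed game client will happily consume, which is exactly why the local cache servers matter.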

I guess other people can give better input than me on how to lay out the network so that latency is roughly the same for everyone, or at least so you don’t have major bottlenecks. You’d probably also need some QoS setup.

Have enough people around for network troubleshooting, enough spare cables and Ethernet connectors, and maybe a security person in case some people freak out when they lose (the more people there are, the more likely it is that someone will).

3 Likes

The ones I linked (Brocade) are 200 USD each for the PoE version, and have 40G uplinks … Ciscos are very good hardware, but I hate that the firmware is download-only if you pay or know a friend of a friend …

1 Like

Ooh, and those are L3. That certainly looks like a better option.

1 Like

At those speeds, downloads are mostly auto-throttled by the servers providing them. It is actually fine.

2 Likes

Yes, that’s why I suggested multiple cache servers. I would put at least one per switch, depending on switch size (and maybe bond multiple interfaces together), to keep the download traffic off the inter-switch backbone links.

Will each person bring their own PC/laptop, or will the game computers also be provided?
Have you done anything similar on a smaller scale?

Don’t get me wrong, I think it’s great that you’re taking the initiative to organise such an event, and luckily there’s still a lot of time for planning.

1 Like

Maybe there’s also a local MSP who can lend/rent you the switches for a couple of days and help you set things up.

Otherwise, I’m thinking 96 ports in the form of CSS326-24G-2S+RM … or maybe the CRS version… because of resale value, and they’re not super expensive at 6-ish / 8-ish per port. They let you keep your 1Gbps copper runs short, they let you isolate ports and block DHCP, and you can grab some metrics off of them and build yourself a dashboard.

You’d daisy-chain two pairs, maybe three pairs if you want 100+ ports; one end would go straight into two/three 10Gbps ports of your pfSense router box, and the other end would go straight into your lancache (a.k.a. a box full of SSDs running nginx).

Your lancache box would connect to your router directly, and to each chain too. Second-hand SFP+ NICs are cheap and plentiful.

This topology means everyone on a chain (48 seats) shares 10Gbps in one direction of the chain or the other, so 40 MB/s lancache … or 20 MB/s lancache + 20 MB/s internet?

Hopefully that’s sufficient, if not… there’s cheap 48 port qsfp+ switches out there…
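For the “box full of SSDs running nginx” part, a heavily simplified sketch of the idea (the actual lancache project ships ready-made configs; the paths, sizes, and resolver here are assumptions):

```
# nginx.conf fragment -- generic pull-through CDN cache, not the full lancache setup
proxy_cache_path /srv/cache levels=2:2 keys_zone=games:500m
                 max_size=2000g inactive=30d use_temp_path=off;

server {
    listen 80;
    resolver 1.1.1.1;                  # required because proxy_pass uses a variable
    location / {
        proxy_cache games;
        proxy_cache_valid 200 30d;
        proxy_cache_key "$host$uri";
        # LAN DNS must point the game CDN hostnames at this box;
        # on a cache miss nginx fetches from the real CDN and stores it.
        proxy_pass http://$host$request_uri;
    }
}
```

The DNS redirection is the important trick: clients think they’re talking to the CDN, and the second person to download a game gets it at LAN speed.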

2 Likes

I think the OP has to decide whether they want to YOLO it and just go with a flat setup, no local servers/caches, and a minimum-effort firewall/security config, hoping for the best and that the attendees behave, or … use it as a learning experience …

The switches I linked (Brocade 6610-48s) are a godsend for the latter, because at 200USD a pop you get:

  • 48xGbit ports for clients
  • 8x10Gbit SFP+ ports for servers
  • 2x 40Gbit QSFP ports that can be used with breakout cables for another 8x 10Gbit SFP+ ports
  • 2x 40Gbit Stacking ports

The switches are Layer 3 capable and (unlike the Mikrotiks) they support routing at line-rate speeds on each port!
If you combine that with the dual 40Gbit stacking, you have an unbeatable price-to-performance setup.
The catch with these switches is that they draw 120 W at idle(!) and generate a lot of heat and noise, so you don’t really want them in your closet/room/house unless you have a dedicated rack in a basement and can afford the electricity cost.

For a LAN party, where you would need:

  • A lot of ports for clients
  • 10Gbit ports for local cache and game servers
  • 10Gbit ports to the firewall
  • some sort of separation between gamers and game admins
  • some sort of internal security filters between the gamers and the servers, and the gamers themselves

You could leverage the Layer3 capabilities of the switches (that at this point become more like routers) and create a setup like this:

Here you section out gamers onto separate VLANs; this example has 8 gamers on each VLAN (ideally one VLAN per desk), a dedicated VLAN for admins, and a dedicated VLAN for servers.

With this layout you could set up access lists on the Brocades to, say, allow only ports 80 and 443 (plus whatever ports the games need) to the game-server VLAN, section off access to the admin VLAN, and section off access to the internet (ideally DNS and HTTPS traffic only), all without bothering the firewall, which will have its hands full handling NAT for a 10Gbit link.
You would also have 24 10Gbit ports for servers and 6 40Gbit ports for cache servers, and the means to push data between the servers and the clients at line rate.
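To make the access-list idea concrete, a hedged sketch in Cisco-like extended-ACL syntax (FastIron is close but not identical, so check the Brocade docs; the subnets and game port are made up for illustration):

```
! Hypothetical addressing: gamers on 10.0.10.0/24, servers on 10.0.50.0/24
ip access-list extended gamers-to-servers
 permit tcp 10.0.10.0 0.0.0.255 10.0.50.0 0.0.0.255 eq 80
 permit tcp 10.0.10.0 0.0.0.255 10.0.50.0 0.0.0.255 eq 443
 permit udp 10.0.10.0 0.0.0.255 10.0.50.0 0.0.0.255 eq 27015  ! example game port
 deny   ip  any any log
!
interface ve 10                       ! the gamers' routed VLAN interface
 ip access-group gamers-to-servers in
```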

The learning curve to set this up and secure it would be steep, but once you get it, this scales easily to hundreds, possibly thousands of machines.

If you want the cherry on top of it you could also set up monitoring on the switches so that the admins can pinpoint data hoarders:

Heck, you could even make it become a source of income if you really got good at it :wink:

1 Like

The second hand market (pre-loved brocade and quanta) is awesome, if available to the OP…

Did the OP say which part of the planet they’re from?

1 Like

I came to say I miss LAN parties lol. Been over 15 years since I participated. We used to take over the dining room and living room, filled the upstairs with computers, folding chairs, and tables. Pissed my mom off lol. Aaah, the glory days.

3 Likes

Not being much into gaming when I was younger, I never got the chance to go to a LAN party.

Had a little one a few weeks ago with some close friends though. Only ended up being three of us, and not too many games I really enjoy on my own, but damn did I have fun.

I’m starting to realize what I missed out on…

2 Likes

Not that I think of it as a super critical issue for a one-off event, but anybody got an effective way of blocking bittorrent at 10Gbps without bankrupting themselves?

Have you looked into HTB+SFQ?

e.g. if it’s outgoing bandwidth, you could have SFQ classify traffic by source IP; that way the 9 machines uploading backups at 1Gbps get the same share of bandwidth as 1 machine with hundreds of tiny torrent flows, making torrents mostly a non-issue for most users.

https://tldp.org/en/Traffic-Control-HOWTO/ar01s06.html#qd-sfq-parameters

It is possible to use external classifiers with sfq, for example to hash traffic based only on source/destination ip addresses

$ tc filter add … flow hash keys src,dst perturb 30 divisor 1024

Basically, flow hash keys src would do what I’m describing for outgoing traffic.
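Putting the pieces together, a sketch of the HTB+SFQ combo on the WAN-facing interface (needs root; the interface name and the 9Gbit ceiling are assumptions, sized just under line rate so queuing happens where you control it):

```shell
DEV=eth0   # WAN-facing interface (assumption)

# HTB root with one class capped just under line rate, so the queue
# builds here rather than in the ISP's gear.
tc qdisc add dev "$DEV" root handle 1: htb default 10
tc class add dev "$DEV" parent 1: classid 1:10 htb rate 9gbit ceil 9gbit

# SFQ leaf: round-robins between hash buckets so no bucket starves the rest.
tc qdisc add dev "$DEV" parent 1:10 handle 10: sfq perturb 30

# Re-key the SFQ hash on source IP only, so each *machine* (not each flow)
# gets an equal share; hundreds of torrent flows from one host collapse
# into a single bucket.
tc filter add dev "$DEV" parent 10: handle 1 protocol ip \
    flow hash keys src perturb 30 divisor 1024
```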

Incoming traffic can be harder in theory. In practice, most torrent clients aren’t so dumb or badly behaved as to request thousands of chunks in parallel from thousands of other hosts and DDoS themselves; it just doesn’t work well unless you’re storing onto SSD (which is expensive) or a very large RAID, with lots of large torrents using a small chunk size, which puts extra load on trackers and is uncommon.


Are you worried about occasional bandwidth bursts? … or is there another reason?

1 Like