Cat 5/5e/6 vs. fibre optic

Good day.

I want to eliminate local storage as much as possible and have my video games installed on, and played from, a ZFS RAID array.

My computer responsible for gaming and the server which will host these games are essentially right next to each other. They share the same 15U rack.

So my question is about latency; I can’t seem to find anything describing what kind of latency occurs over a distance of about 2 m to 4 m.

Is this something I need to concern myself with?
I guess I have the idea in my head that the latency is higher in this situation compared to directly attached local storage.

I don’t think the actual data transfer rate is too critical. I believe that 1000 Mbps would be fine.

So should I get a pair of PCIe SFP(+) cards, some cables, and an SFP(+) switch?
Or is the basic Ethernet switch that I have good enough?

Another question is:
Can I use 10Gb SFP+ PCIe cards with a 1Gb SFP switch and 1Gb transceivers?
I am aware that not all switches will accept any transceiver and that they can sometimes be vendor specific. This is both a cost and execution hurdle to be taken into account.

10Gb SFP+ cards are fairly common compared to anything lower speed, but the switches are expensive.

So is it possible to get a cheaper 1Gb switch and run it with the 10Gb cards?

That would be a reasonably affordable fibre solution, and I assume I would get a latency reduction, but I am no expert and can’t seem to find a clear figure.

I should preface this and say that I realize the latency may not be noticeable, but it’s surely measurable?

I would love suggestions, or advice.
Does anyone have a setup like that?
Does it work well?

Thank you

It is worth noting that before I purchase a switch, I should contact the seller and ask if there are any licensing concerns.
Fully managed switches don’t always come with lifetime access, and I saw in a CraftComputing YouTube video about 40Gb networking that the ports were not enabled and a license was required.

I just want to make sure that this is an avenue even worth considering before I solve technical hurdles like that.

It should just work, assuming the switch understands the transceiver correctly (and no stupid software locks are in place).
With DACs, I am not sure.

Mikrotik CRS305 is cheap. 4x SFP+ and 1x RJ45 Gigabit (to hook your “storage-network” up to your normal network :wink: )

3 Likes

The latency of fibre optic cable will be a drop in the bucket compared to having a remote computer playing middleman for loading games. You’re essentially doing all those calls twice; it’s all the computation that’s going to cause the latency, not the travel (rough numbers in the sketch below).

I’d keep your games local. For games that load everything at once behind a loading screen it might be fine, but modern games constantly stream data.
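To put rough numbers on “drop in the bucket”, here is a back-of-the-envelope sketch in Python. Every per-step figure is an assumption for illustration, not a measurement from this kind of setup:

```python
# Rough latency-budget sketch: propagation over a few metres of fibre
# vs. the rest of a single small read. All figures are ballpark assumptions.

SPEED_OF_LIGHT_M_S = 3.0e8
FIBRE_VELOCITY_FACTOR = 0.67          # light in glass travels at roughly 2/3 c

def propagation_delay_us(length_m: float) -> float:
    """One-way propagation delay over a cable, in microseconds."""
    return length_m / (SPEED_OF_LIGHT_M_S * FIBRE_VELOCITY_FACTOR) * 1e6

# Assumed ballpark figures for one small random read, in microseconds:
budget_us = {
    "4 m of fibre (propagation)":              propagation_delay_us(4),  # ~0.02 us
    "NIC + switch + network stack (assumed)":  100.0,
    "NVMe read on the server (assumed)":       100.0,
    "HDD seek + read on the server (assumed)": 8000.0,
}

for step, us in budget_us.items():
    print(f"{step:42s} ~{us:>8.2f} us")
```

The cable run itself contributes tens of nanoseconds; everything else on the path is measured in hundreds of microseconds or worse.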

2 Likes

You are liberally mixing storage latency with network communication latency.
Assuming we’re talking about storage latency: you need to be concerned about it if your game does a lot of random disk I/O, and that is not typically a concern with games. You get high-throughput, latency-insensitive phases (loading levels, starting the game) and then random access that mostly fills an in-memory cache while you are playing.
The added latency of network access for random I/O will be measurable if the media is NVMe/SSD, and won’t be if the media is an HDD. Whether it will affect your in-game latency is a big ‘maybe, it depends, probably yes’ for me :slight_smile:

A gigabit link tops out at roughly 120MB/s, so it will definitely be a bottleneck if we’re talking SSDs/NVMe drives, and also if we’re talking NAS drives in a RAID config (200MB/s is what you should expect as a minimum in read performance), so I would suggest a 10Gbit link if you want throughput.
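As a sanity check on those numbers, a minimal sketch of the arithmetic, assuming ~6% protocol overhead and two striped NAS HDDs at ~180 MB/s each (both assumptions):

```python
# Back-of-the-envelope: usable Ethernet throughput vs. assumed array read speed.

def usable_mb_per_s(link_gbit: float, protocol_overhead: float = 0.06) -> float:
    """Approximate usable payload rate of an Ethernet link in MB/s."""
    raw_mb_s = link_gbit * 1000 / 8              # 1 Gbit/s = 125 MB/s on the wire
    return raw_mb_s * (1 - protocol_overhead)    # minus assumed framing/IP/TCP overhead

array_read_mb_s = 2 * 180   # assumption: two striped NAS HDDs at ~180 MB/s each

for gbit in (1, 10):
    link = usable_mb_per_s(gbit)
    verdict = "bottleneck" if link < array_read_mb_s else "plenty of headroom"
    print(f"{gbit:>2} Gbit link: ~{link:.0f} MB/s usable -> {verdict} "
          f"vs ~{array_read_mb_s} MB/s of array reads")
```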

Yes, but given you only have one client, I’d go with direct-attached Twinax cables (DACs): much cheaper, less power hungry, you get 10Gbit out of the 10Gbit cards (if you can stick a 10Gbit card in your NAS, that is), and less latency because you are bypassing the switch.

1 Like

Latency isn’t going to be a problem. I’ve been running my games from a server for years and it works great; I can’t tell the difference from running them locally. You can use a local cache to speed things up if it is a problem, but if you have fast storage on your server and a decent network then you probably won’t see much of a difference.

I would recommend 10Gb networking if you can. The bandwidth isn’t that important, but 10Gb NICs support significantly more IOPS than 1Gb Ethernet when using iSCSI (which I would recommend), and this does make a difference, especially to games which load a lot of stuff on the fly. Having said that, it’s not unplayable on gigabit by any means.
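A rough sketch of why per-request latency, not raw bandwidth, is what limits lots of small on-the-fly reads: at queue depth 1, IOPS is roughly the inverse of the round-trip time per request. The per-request latencies below are assumptions for illustration only:

```python
# Queue-depth-1 IOPS ceiling from assumed per-request round-trip times.

def qd1_iops(round_trip_ms: float) -> float:
    """Requests per second if each request must finish before the next starts."""
    return 1000.0 / round_trip_ms

scenarios_ms = {
    "local NVMe (assumed ~0.10 ms)":       0.10,
    "iSCSI over 10GbE (assumed ~0.20 ms)": 0.20,
    "iSCSI over 1GbE (assumed ~0.50 ms)":  0.50,
}

for name, ms in scenarios_ms.items():
    print(f"{name:36s} ~{qd1_iops(ms):>6.0f} IOPS at queue depth 1")
```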

4 Likes

On Windows, having a switch in between, with the “normal network” branching off it, will make things easier since there is only one NIC for Windows to use.

The search term for Windows NIC priority is “Interface Metric”
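For example, a minimal sketch of pinning the metric, assuming Windows with the stock PowerShell NetTCPIP module; “Ethernet 2” is a placeholder alias for the storage-only NIC, and 50 is just an arbitrarily high (i.e. low-priority) metric:

```python
# Sketch: demote the storage-only NIC so Windows prefers the internet-facing NIC.
# Lower interface metric = higher priority, so a high number pushes the NIC down the list.
import subprocess

def set_interface_metric(alias: str, metric: int) -> None:
    """Call PowerShell's Set-NetIPInterface for the named adapter (placeholder alias)."""
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f'Set-NetIPInterface -InterfaceAlias "{alias}" -InterfaceMetric {metric}'],
        check=True,
    )

if __name__ == "__main__":
    set_interface_metric("Ethernet 2", 50)   # "Ethernet 2" and 50 are placeholders
```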

1 Like

Huh?

1 Like

Thanks for all the advice.

For some insight, I intend to build a basic ZFS Z1 pool
(no fault tolerance, but it’s fine).
I have another server that I plan on backing up to for the saves and the like.

This will likely just be 2x 4TB hard drives (8TB total, roughly 2x read/write speed),
so the actual performance should be better than a single drive but still nothing crazy.

So I guess I will try with the switch I have now (the TP-Link 8-port I mentioned earlier) and see what the performance is like; that way I can tell whether going fibre is worth it.
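For my own testing, something like this quick-and-dirty timing of a sequential read from the share should show whether the current switch is the limit. The UNC path is a placeholder; it would point at any large file that lives on the server:

```python
# Time a sequential read of one large file from the game share and report MB/s.
import time

TEST_FILE = r"\\server\games\some_large_file.pak"   # placeholder path
CHUNK = 1024 * 1024                                  # read 1 MiB at a time

start = time.perf_counter()
total = 0
with open(TEST_FILE, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start

print(f"Read {total / 1e6:.0f} MB in {elapsed:.1f} s -> ~{total / 1e6 / elapsed:.0f} MB/s")
```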

Also the motherboard that I plan on using has 2 10Gb ports on it already.
So that is a factor.

When you have two NICs with the same priority on a Windows machine, Windows sometimes fails to use the NIC that can see the outside world. Having a single active NIC avoids that problem.

1 Like

You mean, if you team them?
Usually when you dedicate one NIC to storage you use a different IP and network/VLAN altogether…

2 Likes

I would get some modern(ish) server NICs and mount the drives on your PCs via iSER (standard iSCSI but with RDMA extensions). That way the drives look like local storage to your PCs, which won’t cause problems; some software throws a fit about a mapped network drive. The good server NICs with iSER offload capability can bypass the OS overhead of extra (and unneeded) data copies and serve the drive data over the NIC directly. Examples of NICs with this capability: Mellanox ConnectX-5 series, ConnectX-4 series, Chelsio T6225-CR, T520-CR, T520-BT (RJ45 connections), Intel E810-XXVDA2.

The data path difference looks like this:

That, or set up a proper SAN: put SAS HBAs in your PCs connected to a SAS switch, and have the switch go to a drive shelf with an expander card in it to connect your drives. You can usually run two cables into the drive shelf for 24Gb/s of bandwidth to all your drives, and then each PC has a single 12Gb/s connection to the switch.

1 Like