Need the cheapest way to 10G two computers together

I just figured it out a few hours ago: if you want to share information from a different website, hit the upload button instead of the share-link button.

You also have to be at the trusted-member user level; basic members' links will not be auto-followed / auto-fetched.

3 Likes

Thanks for clearing that up for me and the rest of the guys posting on this forum, @Dynamic_Gravity. While I've got your ear, I have a quick question: how do you send a PM to any member of this forum? Never mind, I figured it out myself.

I want to point out the link must be on its own separate line. Here is an example:

This is the link: https://www.newegg.com/Product/Product.aspx?Item=N82E16833736040

VS this:

https://www.newegg.com/Product/Product.aspx?Item=N82E16833736040

They're the same link. The second one is just on its own line.

The link is the one in the first post.

3 Likes

wish you’d asked this a few days earlier (or that I’d had the presence of mind to ask). just finished ordering parts for my own point-to-point fiber connection (desktop ←→ NAS):

seems EVERYONE recommends Mellanox ConnectX-2. Qain recommended the Finisar transceivers. Got the wall jacks only because… i’m running the fiber through the wall

hints (things i wish i’d figured out sooner than i did):

transceivers: look for 10GbE SFP+ “SR” or “SR/SW”. there are “XFP” transceivers that looked (to me, anyway) like they’d work, and were cheaper, but i was advised against risking it. (…after i’d ordered them.)

fiber: look for “LC” (connector type), “duplex” (two fibers), “multimode OM3 (or OM4)” (capable of 10Gb). they’re aqua colored (the orange OM1/OM2 cables are no good for 10Gb).

good luck! I’ll let you know how setting mine up goes. everyone says it’s “surprisingly easy”!

1 Like

XFP is a larger module; it won’t fit in an SFP+ slot, and it’s not electrically compatible either.

2 Likes

The Mellanox ConnectX-2 cards are relatively cheap, and they seem to be everywhere. However, they are not compatible with pfSense because Mellanox stopped supporting the older cards on BSD. They do still work on FreeNAS, though.

Chelsio cards are higher up on the list for universal compatibility; I have one on the way to put in the firewall.
A quick search on eBay can get you one relatively inexpensively.

3 Likes

given matching transceivers, is it important to have matching nics?

You don’t “NEED” matching transceivers, just matching modes (single-mode or multimode). It gets a little more complicated than that, but as long as the transceivers are the same mode and configuration you should be good. (Getting a matching pair is easy.)

I currently have a 1Gb/10Gb Intel SFP+ transceiver connected to a 10Gb off-brand one. They work just fine.

all mine are connected to a switch.

If your servers are right next to each other, you could also consider just getting a copper interconnect… but that doesn’t sound as cool. :smiley:

Side note, @ThisOldMan: while a 10Gbps network is nice, if your hard drives/SSDs are not able to read and write at 400MB/s you won’t see that even on 10Gbps… (But you will know the network isn’t the bottleneck. :smiley:)
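
If you want to sanity-check your own numbers, here's a rough back-of-the-envelope sketch; the drive figure and the overhead allowance are ballpark assumptions, not measurements:

```python
# Back-of-the-envelope check: is storage or the network the bottleneck?
# The drive numbers below are illustrative assumptions, not measurements.

def link_payload_mb_s(line_rate_gbps: float, efficiency: float = 0.94) -> float:
    """Rough usable throughput of a network link in MB/s.

    efficiency is a hand-wavy allowance for Ethernet/IP/TCP overhead;
    real numbers depend on MTU, protocol, and tuning.
    """
    return line_rate_gbps * 1000 / 8 * efficiency  # Gbit/s -> MB/s

array_read_mb_s = 3 * 150          # e.g. three HDDs at ~150 MB/s each in RAID 0
gig_e   = link_payload_mb_s(1)     # ~117 MB/s usable on 1GbE
ten_gig = link_payload_mb_s(10)    # ~1175 MB/s usable on 10GbE

print(f"array ~{array_read_mb_s} MB/s, 1GbE ~{gig_e:.0f} MB/s, 10GbE ~{ten_gig:.0f} MB/s")
print("bottleneck:", "network" if array_read_mb_s > ten_gig else "storage")
```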

4 Likes

Don’t you think you could get around that pfSense caveat by running a different Linux OS that supports the Mellanox and then, using network bridges and KVM, creating a virtual pfSense VM?

ok; that’s what i figured. cool.

for now I’m only doing a point-to-point link (and the boxes are in different rooms/floors, so fiber it is), but when i graduate to a network i’ll have choices to make. for example, whether to get a 10Gb switch or just use the one I have (which already has six 2.5Gb SFP ports).

@ThisOldMan sorry to derail your thread!

OR, you buy an equally inexpensive, supported card.

what you are describing could work, but is more work than it’s worth, and has no benefit.

I’d recommend the Mellanox ConnectX-2 VPI. If you use InfiniBand instead of Ethernet, you can set up NFS over RDMA, which has much less overhead compared to NFS over UDP/IP. Useful for very fast SSD RAIDs or large RAM disks.
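
If you go that route, the client side is basically a kernel module plus a mount. Here's a minimal Python sketch of that step, purely to show the shape of it; the server address, export path, and mount point are made-up placeholders, and the module name and mount options should be double-checked against your distro's NFS-over-RDMA docs:

```python
# Minimal sketch of mounting an NFS export over RDMA from a Linux client.
# The addresses and paths below are hypothetical placeholders. Requires root.

import subprocess

SERVER_EXPORT = "192.168.10.2:/tank/share"   # hypothetical InfiniBand peer and export
MOUNT_POINT = "/mnt/nas"                     # hypothetical local mount point

# Load the NFS/RDMA transport module (named xprtrdma or rpcrdma depending on kernel).
subprocess.run(["modprobe", "xprtrdma"], check=True)

# proto=rdma with port 20049 is the conventional NFS-over-RDMA client setup;
# verify both against your distro's documentation before relying on them.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "proto=rdma,port=20049",
     SERVER_EXPORT, MOUNT_POINT],
    check=True,
)
```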

2 Likes

I was going to go the crazy route and suggest bonding several NICs together on the switch and computer. If you have several gigabit NICs and you bond them together, you can make a hackjob 5 or 10Gb network, in theory.

HELL YEAH quad nics in every slot

Thanks @DeusQain, I have 3 [old] Samsung hdds in raid zero, which reliably provide 380 to 440 MB/s reads on large sequential files (2GB+). Obviously, sustained writes are less, but only when they break the write-back cache. Unsurprisingly, the shorter (cached) writes are actually faster than the reads.

I plan to add a few more hdds and migrate to a raidz array in the future. The network is the current bottleneck and will be even more so later. Even a 4 x 1G network config would be a partial bottleneck with the current disks, so the ‘new’ network must be 10G (or better) in preparation for the future raidz array.

Though not as ‘cool’, I only need a 6 meter run, so I’m going copper as recommended by you and many other helpful people in this thread.

Thanks for your input, it’s much appreciated.

1 Like

No problem, I appreciate all the info in this thread and consider everything as useful knowledge. :smile:

I considered that. But in my case, I have only 4 unused ports on my switch, so that adds the cost of a bigger switch or 4 x crossover cables for point-to-point. I already have a handful of extra cat-6 cables, so if I had 8 free ports on the switch, I’d probably go for 4 x 1G in the very short term.

Once you add the cost of a bigger switch or the cost of the crossover cables and factor in the PITA to get SMB to run over multiple IPs, it doesn’t look quite as interesting. Even then, the 4 x 1G would bottleneck my current NAS and be utterly useless as soon as I add more hdds to the NAS in the near future.

I have to say that with several extra ports on the LAN switch, and the inexpensive quad 1G cards available, there is a case where this config has merit. For example, if I was building a budget DIY NAS with 2 hdds in raid zero, then 3 x 1G would [just barely] put the bottleneck at the NAS, not the network.

Assuming that I already had a box full of cat-6 cables and a huge switch, it might be cost effective for a 2-drive NAS. Alas, it just doesn’t scale well after that point. Add a third drive to the NAS, and the bottleneck lands squarely on the network, even when using 4 x 1G. Add a fourth drive to the NAS and it’s game-over. Not to mention the cabling nightmare of 8 more cables over to the switch.
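
To put rough numbers on why it stops scaling, here's a quick sketch; the ~150 MB/s per drive and ~117 MB/s usable per gigabit link are ballpark assumptions, and it generously pretends the bonded links aggregate perfectly for a single transfer:

```python
# Rough scaling check for the "bond N x 1GbE links" idea: how many ideal
# gigabit links would it take to keep up with a RAID 0 of k drives?
# Assumes ~150 MB/s sequential per HDD and ~117 MB/s usable per 1GbE link.

import math

PER_DRIVE_MB_S = 150   # assumed sequential read per drive
PER_LINK_MB_S = 117    # rough usable payload of one 1GbE link

for drives in range(2, 8):
    array_mb_s = drives * PER_DRIVE_MB_S
    links_needed = math.ceil(array_mb_s / PER_LINK_MB_S)
    print(f"{drives} drives (~{array_mb_s} MB/s) -> at least {links_needed} x 1GbE links")
```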

Not sure where the crossover cables came into play, but realistically it shouldn’t be hard to get a NAS to work with more than one NIC. Load balancing is quite a common feature. That is basically what the bonding does. By load balancing across the four 1G NICs, you are effectively getting a 4-gigabit link to your network. Your NAS can transmit data as fast as it can push it from the drives. Remember, a Hard Drive won’t actually read/write at 6 Gigabytes per Second. They usually only do about 100 Megabytes per second on average. Even on RAID 0, you are hitting around 300 Mbps.

Well, either I have 8 unused ports on the switch (and 8 cables handy) or I simply bypass the switch altogether with 4 crossover cables direct to the NAS. Four crossover cables are cheaper than 8 cables plus a new (bigger) switch. That is where the crossover cables came into play.

As mentioned previously in this thread, the ‘NAS’ is my old FX-8350 desktop running Win 10 for now, and soon to be upgraded to a Linux ZFS platform with twice the HDDs. The ‘quite common’ NAS feature set is whatever I configure it to be under Linux.

Load balancing is a completely different animal than channel bonding, and recent Linux SMB implementations seem to struggle with proper channel bonding. Apparently, it’s the one thing that Windows does better than Linux at the moment.

If you scroll up two replies, you’ll see that my ‘NAS’ is quite happy to pull 400+ MB/s (sustained) from the HDDs at this moment in time, and I plan to reconfigure it to a RAID Z array and add a few more HDDs to the mix. So, 400 MB/s of network is not going to cut the mustard.

My HDDs can read 150 MB/s sustained sequential EACH and I have THREE of them in a RAID zero config, soon to be 5 or 7 HDDs.

You said: “Remember, a Hard Drive won’t actually read/write at 6 Gigabytes per Second. They usually only do about 100 Megabytes per second on average. Even on RAID 0, you are hitting around 300 Mbps.”

For the record, I have to correct you by saying that SATA3 is 6 Gb/s, not “6 Gigabytes per Second”. https://www.webopedia.com/TERM/S/sata_3.html
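
For anyone who wants the arithmetic, here's a quick sketch of what that 6 Gb/s line rate actually works out to; the 8b/10b encoding factor is standard for SATA, and the result is a per-port ceiling, not what any drive actually delivers:

```python
# Units sanity check: SATA III is 6 gigaBITS per second, not gigabytes.
# With SATA's 8b/10b encoding, 10 line bits carry 8 data bits, so the
# practical ceiling is about 600 MB/s per port -- far below "6 GB/s".

SATA3_LINE_RATE_GBPS = 6.0
ENCODING_EFFICIENCY = 8 / 10        # 8b/10b encoding overhead

max_mb_s = SATA3_LINE_RATE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY
print(f"SATA III usable ceiling: ~{max_mb_s:.0f} MB/s per port")
```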

When you say “They usually only do about 100 Megabytes per second on average”, I would say that my HDDs are 150 MB/s sequential read sustained, but I grant you that all HDDs are non-linear (depending on physical location of data), and RAID zero is also non-linear as it tries to scale to multiple drives.

However, it would take several more drives in a RAID zero config before they ever drop down to 100 MB/s “on average” for a sequential operation. As for writing, well, that is cache dependent, and a whole lot more complicated. The hugely significant factor of head seek time is also something to consider, but let’s not bother. Let’s just stick to “max MB/s dumping onto the network” as the metric of importance to stay on-topic.

As to your final comment: “Even on RAID 0, you are hitting around 300 Mbps.”

First, I’m going to assume that you meant 300 MB/s and not “300 Mbps”, to give you the benefit of the doubt. In either case, you are conflating “RAID 0” and the specific case of two (apparently terrible) HDDs in a RAID zero configuration. RAID zero means two (or more) HDDs in a stripe set with no parity. In this particular case, I have THREE drives in my “RAID 0” stripe set, (soon to be 5 or 7 drives in RAID Z).

My RAID zero config already hits 450 MB/s, so a 4 x 1G network would ALREADY be bottlenecking the data. What do you imagine will happen when I add the other 4 drives to the RAID Z array?

You don’t need crossover cables; all gigabit NICs have automatic crossover detection (Auto MDI-X), so you can connect any two devices and they’ll just work it out.

But link aggregation does not work the way it’s being described here. You may get a total of 4Gbps, but that is across multiple devices; for any traffic between two hosts you can only get a maximum of 1Gbps. So in a point-to-point configuration it’s completely useless except as a failover. With a switch and multiple clients all accessing the server, you will see more than 1Gbps of total traffic, but each host will still be limited to a maximum of 1Gbps.
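
Roughly speaking, the bond picks one physical link per flow by hashing the flow's addresses, so a single transfer can never spread across links. Here's a toy sketch of the idea (the hash here is a stand-in, not the exact policy any real switch or bonding driver uses):

```python
# Toy illustration of why bonding caps a single transfer at one link's speed:
# the bond picks a physical link per *flow* by hashing the flow's addresses,
# so every packet of one TCP connection lands on the same 1GbE link.
# (Real hash policies -- layer2, layer2+3, layer3+4 -- differ in detail.)

import hashlib

LINKS = ["eth0", "eth1", "eth2", "eth3"]   # four bonded 1GbE ports

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return LINKS[int(hashlib.md5(key).hexdigest(), 16) % len(LINKS)]

# One big SMB/NFS transfer = one flow = one link, no matter how many links exist:
print(pick_link("192.168.1.10", "192.168.1.20", 50000, 445))   # always the same link

# Multiple clients or connections (different flows) can spread across links:
for port in (50001, 50002, 50003, 50004):
    print(port, pick_link("192.168.1.10", "192.168.1.20", port, 445))
```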

SMB Multichannel may be something you want to look at instead. But honestly, for a point-to-point link just get some 10Gb NICs and save yourself a lot of headaches.