Need the cheapest way to 10G two computers together

Completely out of context in this thread, and I would argue that it was completely non-applicable to the topic under discussion.

So you assert that the linked article citing a 2 GB Windows Server 2012 is relevant in 2018?

I noticed that none of those was a file server worthy of differentiating the performance of different OSes in a LAN file-serving role.

Well we can agree to disagree on what “build” means. To me “build” means actually putting fingers on it. Configuring an imaginary thing does not reach the threshold of “building” in my book. By your definition, Alice “built” a looking glass and the following universe. Also, when you say “these days” does that include when that article was written or what is available in 2018?

Funny, when I read reports that I consider unbiased, not a single one of them uses a VM in the benchmarks. Are you suggesting that you can also benchmark the FPS of video cards with a VM, or just networking with a VM?

Is there ANY 2 GB VM that would be sufficient to compare the network throughput where the ONLY variable is the OS?

That may be your rules of engagement, but mine are different. My rules for engagement are as follows: Post in public, get challenged in public. Post in private, get challenged in private. I’m not accusing you of not contributing, I’m accusing you of being a pedantic dick-on-purpose while completely ignoring the actual context of this thread.

Are you serious? You actually think a LAN based NAS benchmark can be reduced to the media bandwidth? So that “raspberrypi” can correctly evaluate a 10G network? What are you smoking? Hell, let’s give you the moon and ask if that “raspberrypi” can serve 10 random files at an average sustained rate of 1G from a pair of 7200 rpm HDDs. That is a walk in the park for a proper file server and an absolute necessity to compare two OS’s in a file serving role.

Well this went off the rails.

Alright, let’s go down the list.

Fair enough, but you don’t moderate this thread. IF it was actually out of scope, as deemed by a mod, then this can be remedied.

2012 R2 is still supported by Microsoft, so yes, it’s entirely relevant in 2018.

As long as the network is up to spec, throughput would be limited by the hardware and the protocols used. SMB is best for Windows, NFS is best for Linux. NFS only works on Pro and Enterprise Windows editions, so if you had a mix of clients in a professional setting then this would be the way to go. SMB on Linux has performance issues.

Imaginary =/= virtual. Many people have a FreeNAS box running as a VM and pass a disk shelf through to it. Quite common for smaller homelab setups.

Typically, in English, “these days” refers to the current time, i.e. circa 2018.

VMs typically take a slight performance hit, but that is with computational tasks. If hardware is passed through directly to the VM, then there is no performance hit.

Any Linux distro, with iperf. For Windows you would need to use FileZilla and then transfer a single large file. I am unsure if they cross-compile iperf for Windows.
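
If a trustworthy iperf build for Windows is hard to come by, a rough stand-in is a bare TCP throughput test using Python’s standard library (3.8+), which runs the same on Linux and Windows. This is only a sketch; the port, block size, and duration are arbitrary choices, it measures a single TCP stream, and it is no substitute for iperf:

```python
# throughput.py - crude single-stream TCP throughput test; run the same file on both machines.
# Sketch only: port, block size, and duration are arbitrary; iperf remains the better tool.
import socket
import sys
import time

PORT = 5201          # arbitrary; pick anything the firewall allows
BLOCK = 64 * 1024    # 64 KiB per send/recv call
SECONDS = 10         # how long the sender pushes data

def server():
    # Accept one connection and count bytes until the sender closes it.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(BLOCK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 1e6:.0f} MB = {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def client(host):
    # Push zero-filled blocks for a fixed duration, then close so the server reports.
    payload = b"\x00" * BLOCK
    with socket.create_connection((host, PORT)) as conn:
        start = time.time()
        while time.time() - start < SECONDS:
            conn.sendall(payload)

if __name__ == "__main__":
    server() if len(sys.argv) == 1 else client(sys.argv[1])
```

Run `python throughput.py` on the receiving machine, then `python throughput.py <receiver-ip>` on the sender; treat the number as a floor rather than a ceiling.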

Fantastic; however, this is in the forum rules. I suggest you read them.
Link provided for convenience.

I was not trying to be a “pedantic dick-on-purpose”, I was trying to contribute what I thought would be useful.

No, I didn’t; I said for testing gigabit connections, not 10 Gb.

People run Nextcloud on Pis and small file servers for things like documents and small files. It works quite well as a low-cost GUI solution.

The limiting factor for file servers is not the OS they run on (at least between Linux and Windows). Hardware being the same, they will perform the same. It comes down to the protocols they use. Like I said earlier, NFS is best for Linux and SMB is best for Windows (currently, in the year 2018). The only place where the OS matters is that the BSDs have a more streamlined driver stack, so they often get better performance on similar hardware when compared to Linux and Windows.

I don’t smoke, but I definitely want to drink right now.

Agreed.


posting this as @ThisOldMan is crafting a reply right now . . .

Quit the bickering folks and get back on topic. No need to include snide remarks and jabs at folks when discussing different opinions.


I was not aware that NFS was for Enterprise Server only. Here are some Free NFS servers for Windows if you choose to go this route:

https://www.hanewin.net/nfs-e.htm

https://nfsforwindows.com/support-faq

If not, I hope you find the answers you are looking for and wish you the best of luck. LM9 signing out.


Taylord Tech’s video on the subject is pretty good. Explains SFP, different interconnect types, etc.

And if you happen to care much about the age of information, it’s not too old :wink:
He references iTechStorm’s series on 10Gb Home Network, which goes super in depth comparing SFP vs copper and all kinds of setup and information. (A good watch if you have the time.)

I never suggested that a mod needed to spank you, nor that I am the king of correct speech; I simply voiced my opinion that, given the title of this thread and the topic at hand, what you posted was off topic. Feel free to report me to the moderators if you think that it’s necessary.

But FileZilla would be FTP, not SMB, right? A better test would be using SMB to cross-transfer several files at once, or at least a dozen random files in series. Just saying. Are we still arguing NFS vs. SMB (where I never had a dog in the fight) or are we back on the topic of network hardware? It’s hard to follow you.

Thanks for that. Can you highlight the part where using PMs is required when I think someone is off topic, please? I can’t seem to locate that part. Actually, I can’t find anything in there mentioning PMs at all, but it’s entirely possible that I’m just daft.

And I would once again direct you to the title of this thread and mention that context is important.

I’ve never debated any of that either way except to say that none of it is on-topic, and the link you posted did not present a credible testbed for what they claimed to be testing.

I concur. To do my part, I vow to just ignore anything that is not actually on the topic of “Need the cheapest way to 10G two computers together”. Fair enough?

“[What is] the cheapest way to 10G two computers together[?]”

  1. Buy used Mellanox ConnectX-2 cards for Linux or Windows. Buy Chelsio SFP+ cards for BSD.

  2. Buy an SFP+ DAC cable from fs.com

  3. Tune kernel/config (Linux) (FreeNAS/Windows)
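
For step 3 on the Linux side, the usual knobs are jumbo frames, a dedicated subnet, and bigger socket buffers. Here is a minimal sketch of what that can look like; the interface name enp1s0, the 192.168.10.0/24 subnet, and the buffer values are all assumptions to adjust for your own setup (run as root), and Windows does the equivalent through the NIC driver’s advanced properties:

```python
# tune_10g.py - sketch of common Linux tuning for a point-to-point SFP+ link (run as root).
# Assumptions: the 10G NIC is enp1s0 and 192.168.10.0/24 is a free subnet; change both to match your gear.
import subprocess

NIC = "enp1s0"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Jumbo frames: both ends of the DAC must agree on the MTU or throughput gets worse, not better.
run(["ip", "link", "set", "dev", NIC, "mtu", "9000"])

# Static address on its own subnet so the 10G traffic never touches the main LAN.
run(["ip", "addr", "add", "192.168.10.1/24", "dev", NIC])

# Larger TCP buffers; these values are a common starting point, not gospel.
for key, val in [
    ("net.core.rmem_max", "16777216"),
    ("net.core.wmem_max", "16777216"),
    ("net.ipv4.tcp_rmem", "4096 87380 16777216"),
    ("net.ipv4.tcp_wmem", "4096 65536 16777216"),
]:
    run(["sysctl", "-w", f"{key}={val}"])
```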


pretty much on point, thanks for that.

Just want to add a few things:
Mellanox ConnectX (2 & 3) VPI InfiniBand cards can do Ethernet too. VPI = Virtual Protocol Interconnect.
ConnectX-3 FDR14 HCAs support up to 40GbE, or 4x 10GbE links through a splitter cable, which can bring down the number of cards and cables needed.
There are also QSFP(+) to SFP(+) transceivers which allow for a mix of different cards and/or ports.

My current favorite is an IBM 544FLR FDR InfiniBand controller.
It costs about 30€ + shipping and hopefully gives dual 40GbE QSFP+ ports. Have to test those soon.


That’s all true, but would not qualify as “the cheapest way to 10G two computers together”.

OP is extremely adamant that the conversation be targeted to his specific requirements.


Be careful with exactly which ConnectX-3 you are getting. They have:

  1. A particular connector type - either SFP+ (which is best for your situation) OR QSFP (overkill for you and introduces difficulty)
  2. Protocol - either ethernet, infiniband, or both - usually it’s both, but I seem to remember seeing IB-only cards
  3. A speed rating - either 10GBit or 40GBit ethernet or others for infiniband (I think 8, 10, 40, or 56).

What I think you want is the ConnectX-3 CX311A - it’s a single-port card with an SFP+ connector.
There exist other cards that have a QSFP connector but only push 10Gbit Ethernet. I think that’s the CX354. You don’t want that.
I’ve seen some CX311s on eBay for almost as cheap as the ConnectX-2s (around $20), but I think the seller offloaded them all. Note that there are also some ConnectX-2s that use QSFP.
I hope this helps - good luck.
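
Once whatever card you order actually shows up, it’s worth confirming which variant you got before blaming the DAC or the switch. A quick sketch, assuming a Linux box with lspci and ethtool installed and an interface name of enp1s0 (yours will differ):

```python
# check_nic.py - sanity check which Mellanox variant you ended up with and what speed it linked at.
# Sketch only: assumes lspci/ethtool are installed and the interface came up as enp1s0.
import subprocess

IFACE = "enp1s0"

# The lspci line tells you ConnectX-2 vs ConnectX-3 and the exact part.
lspci = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout
print([line for line in lspci.splitlines() if "Mellanox" in line])

# ethtool reports the supported modes and the negotiated speed (you want "Speed: 10000Mb/s").
print(subprocess.run(["ethtool", IFACE], capture_output=True, text=True).stdout)
```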

Interesting thread. I picked up some good info on 10G. I had no idea the pricing was so reasonable. I have always looked at the NICs with RJ45 and never considered them because of the prices stores have on them.


I wanted to follow up and let everyone know what I eventually was able to implement. It’s the old good news / bad news thing. First, thank you to everyone that contributed to this thread; I really appreciate it, and it’s good to see that it was useful to a few others. Second, I apologize for being an ass; I’ll do better in the future. Here is what happened:

Nicehash went offline and took $60 of my BC with them which put a delay of a few weeks in the plan. During that time, those NICs became really scarce (which happens). The only reasonably priced ones I could find (when I was ready to buy) were refurbs and they were about 40% more than the new ones I looked at just a few weeks earlier (supply and demand). Adding salt to the wound was finding that the 7 meter DAC was out of stock as well and the ones I could find near that length were $80-$300 which is silly because DAC cables were easier to find and MUCH cheaper just a few weeks prior.

I might have drilled the 'net harder, but the increased price meant that I’d have to mine for a few more weeks to get the money, so it was a moot point until I had some more money. Meanwhile, Nicehash went back online and they were repaying that money over time as I continued to mine. So, by the time I had mined enough to pull some out (without insane transaction fees), the cost of the ‘cheap solution’ was very close to the cost of the brand new RJ45 NICs that I wanted in the end-game.

I was having a routine chat with my kid, he asked what I was up to. So, I related this whole frustrating situation of bad timing and the Nicehash bad luck AND bad timing. We had a good laugh about it and Murphy’s law. The next morning, I got an email from him telling me that he dropped $100 in my PayPal and he told me to get the cards that I really wanted and have a beer on him.

The bad news is that I never did get to build that cheapest 10G (which is undoubtedly the SFP+ cards + DAC as suggested by all the helpful people in this thread). The good news is I got to build what I wanted in the first place, and it still got built many months earlier than I feared it would.

So here are the bench numbers for what I threw together. The NIC drivers were tweaked to enable jumbo frames (hugely important), and the receive and transmit buffers were bumped up to favor performance over memory usage. The NICs were assigned manual IP addresses on a different subnet than the main LAN uses on the other NICs.

The NAS was set up with a 4 GB ramdisk shared over SMB. The benchmark was ATTO v3.05 using a target file size of 1 GB. The first numbers are with the NAS in full mining mode, where 6 of the 8 cores are mining and the R9-380 is also mining.

Ramdisk while mining:
Write: 665 MB/s
Read: 919 MB/s

Ramdisk while NOT mining:
Write: 741 MB/s
Read: 936 MB/s

All else being the same, here are the numbers when testing the triple 1 TB spinning-rust RAID Zero ‘drive’ (which is the normal NAS operation) while also mining, because mechanical drives are so slow that the mining just doesn’t matter.

Spinning rust RAID Zero (65% full) while mining:
Write: 328 MB/s
Read: 304 MB/s

Now, the same tests running locally on the NAS itself while NOT mining (because the benchmark itself burns a bunch of CPU).

Spinning rust on NAS:
Write: 370 MB/s
Read: 344 MB/s

Ramdrive on NAS:
Write: 3323 MB/s
Read: 2962 MB/s
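
For anyone who wants to sanity-check numbers like these without ATTO, a rough sequential pass can be run from the client against the mapped share with a few lines of Python. This is only a sketch: the Z:\ drive letter is an assumption for wherever the SMB share is mapped, and a single 1 GiB sequential stream won’t reproduce ATTO’s block sizes or queue depths:

```python
# smb_bench.py - crude sequential write/read test against a mapped share (hypothetical Z:\ mapping).
# Rough sketch only; pass the mount point as the first argument if it isn't Z:\ on your machine.
import os
import sys
import time

share = sys.argv[1] if len(sys.argv) > 1 else "Z:\\"   # assumed mount point of the SMB share
target = os.path.join(share, "bench.tmp")
size = 1024**3              # 1 GiB test file, same order as the ATTO run above
block = 4 * 1024**2         # 4 MiB sequential blocks
buf = os.urandom(block)     # random data so nothing gets compressed or deduplicated away

start = time.time()
with open(target, "wb") as f:
    for _ in range(size // block):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())    # make sure the data actually left the client
print(f"write: {size / (time.time() - start) / 1e6:.0f} MB/s")

# Reads right after a write may be served from the client cache; remount for a cleaner number.
start = time.time()
with open(target, "rb") as f:
    while f.read(block):
        pass
print(f"read:  {size / (time.time() - start) / 1e6:.0f} MB/s")

os.remove(target)
```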

Conclusion:
The real world disk access from my workstation has tripled and the new 10G network has lots of headroom to RAID / ZFS a few more platters over on the NAS in the future. I’m very pleased with this result, and I’m sure that I could tune it a bit if I want to min/max things. The NAS can mine 24 x 7 to buy a few more platters, and the current setup is still fast enough to game over SMB so I don’t need to buy a bigger SSD when prices are stupid-high right now.

The next phase for the NAS will be to rid it of Windows and get it running Linux where it will eventually become a 3-way router / firewall, DHCP server, ZFS file server, backup server, and media streamer. All while being headless. I hope.

Once again, thanks to everyone that participated in this discussion. Sorry to have necro’d your thread.