ESXi Crossover Cable for vMotion (?)

I have a pair of HPE ProLiant Gen10 Plus MicroServers for my home lab. They have 4 x 1GbE NICs, with two on each unit dedicated to vMotion. It's dreadfully slow, so I'm looking at getting some 2 x 10GbE cards to pep the speed up.

Issue is, all of my 10GbE switch ports are occupied. Is it possible to use crossover cables between the two hosts just for vMotion? In theory it should work, but I wondered if someone has actually tried it.

Auto MDI/MDI-X is mandatory - you can hook the hosts up directly and use static IP addressing.

I assume they're near each other physically … you could try SFP+ or QSFP+ (40Gbps) cards with short twinax DAC cables … might be cheaper and/or faster than 10Gbps copper/UTP Ethernet.

2 Likes

I have them behind my TV so the connectors are about 4 feet apart.
I use Twinax DAC cables on all of my 10Gb connections. Much cheaper than buying fiber and SFPs. Plus, I have cats, so the Twinax is more resilient to their activities. :smiley:

Does direct Twinax allow for switchless vMotion traffic? Was thinking RJ45 so I could use a crossover. Open to ideas.

If you can ping over that cable (on the same link), vMotion will work.
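
For reference, the per-host setup from the ESXi shell looks roughly like this - the vSwitch name, vmnic, vmk number, and 192.168.50.x addresses are just placeholders for whatever you end up using:

```
# Standard vSwitch backed only by the direct-attached NIC
esxcli network vswitch standard add --vswitch-name=vSwitch-vMotion
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-vMotion --uplink-name=vmnic2

# Port group and vmkernel interface for vMotion, with a static IP on the point-to-point subnet
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-vMotion --portgroup-name=vMotion
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.1 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion

# Repeat on the second host with 192.168.50.2, then test end to end over the vmkernel interface
vmkping -I vmk1 192.168.50.2
```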

1 Like

Correct, I just want to make sure a ping is possible with that config before spending $500 on 10Gb cards and cables. :slight_smile:

Crossover cables stopped being a thing 20 years ago, when NICs and switches started getting auto MDI/MDI-X capability, as Risk mentioned. I also don't believe a "crossover type" was ever a thing on the SFP side, even for variants like twinax SFP cables.

What cards and cables are you looking at getting? Mellanox 10Gb CX-3 cards are often about $20-30 on eBay, and used SFP+ cables can also be found there very cheap.

2 Likes

You probably don’t want to get 10GBase-T cards (copper). You’ll find 10G SFP+ cards much cheaper, and only need a twinax cable to connect them together.

1 Like

As of now, this is the leader. There are RJ45 and SFP options. I just don't want to have to replace my switches as well, so RJ45 is the cheaper option.

Umm, didn’t you just say:

You can connect SFP ports directly together with twinax just as easily as RJ45.

I've been bitten a few times with incompatible SFP/SFP+ at work. The cost of shipping where I live is often higher than the cost of the gear itself, so shipping something back for a refund costs me more than I paid. As a result, I take my time and weigh the options.

Appreciate the feedback. Will be googling tonight.

My only concern with those 10GTek models is that the RJ45 versions say they are Intel X540 chipset cards, yet the X540 was discontinued three years ago. So either 10GTek bought up a massive supply of the chips somewhere, or they are selling counterfeit chipsets on the NICs, possibly reverse-engineered clones. Counterfeit chipsets are actually very common from Chinese companies nowadays.

If going 40Gbps…

Then the MCX354A-FCBT is the one to look for - roughly $50-70 a pop depending on how lucky you are. The QCBT variant might need flashing.
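
If you do end up with the QCBT and want to try crossflashing it, the check looks roughly like this with mstflint on a Linux box - the PCI address and firmware file name below are placeholders, and crossflashing is at your own risk:

```
# Find the card and check the current firmware level and PSID (PCI address is an example)
lspci | grep Mellanox
mstflint -d 04:00.0 query

# Burn an FCBT firmware image (file name is a placeholder for whatever image you download);
# the PSID differs between the QCBT and FCBT variants, so the override flag is needed
mstflint -d 04:00.0 -i fw-ConnectX3-MCX354A-FCBT.bin -allow_psid_change burn
```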

https://www.fs.com/de-en/products/30897.html is the cable to go for.


If you're going for just 10Gbps, on the other hand, I don't know the exact numbers - I think it'd probably cost you more than half the price of 40G anyway.

I also don't know how much throughput you'd actually get with the Gen10 microservers … is there any use case needing more than 1GB/s transfers?

Think of it this way. When I'm at work, if it takes two hours to download an ISO on the company's 100Mb Internet, I'm getting paid for that time. At home, I have the same file in 15 minutes because I pay for high-speed Internet. That's my time. 10Gb is overkill for a home lab, but when patching servers I can drop a host into maintenance mode, migrate the active VMs, and be good to go in minutes. Then I can get on with enjoying my day off a lot faster. That's worth a few extra $$$.

STH showed that 10Gb works well on the Gen10 Plus microservers. They really are decent hosts. They do everything I need and still get enterprise-grade firmware and security updates. I test a lot of work stuff at home. Functionally, there is no difference between our production stack and my home lab, just capacity. I can practice with zero impact if I make the wrong choice. Worst case scenario, I can rebuild everything in three to four hours.

40Gb / 100Gb is something I'd love to have at work, but budgets… At home, it makes no real sense. I'd be hitting power constraints running those speeds.

Grabbed two of these. They come with a cable, so they should work. Decent price as well. Could only order one at a time though, which is a bit of a PIA. Will report on how they work when installed.

In English for the non-network nerds: the 1GbE (1000BASE-T) standard mandates auto-sensing, which effectively turns a straight-through cable into a crossover if you plug two hosts into each other.

It was optional for 100Mb Ethernet (and famously not supported by Cisco, who just adhered to the 100Mb spec). If you're running gigabit (or above), you don't need crossover cables any more.

edit:
Crossover is still a thing for fibre cables. If you go LC-LC between two hosts, for example, you'll need to flip one end of the cable; fibre does not auto-sense.

It's also a thing in some specific circumstances for Ethernet, for example fail-to-wire in-path network accelerator boxes (e.g. Riverbed Steelhead), but for the most part, with copper, as above - crossover cables are dead.
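
If you want a quick sanity check once the hosts are cabled together, the ESXi shell will show whether the link negotiated properly over a plain straight-through cable - vmnic2 below is just an example name for whichever port you used:

```
# Link state and negotiated speed for all physical NICs
esxcli network nic list

# More detail (auto-negotiation, duplex, speed) for a single port
esxcli network nic get -n vmnic2
```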

3 Likes