25Gbps fiber upgrade for the lab

Good evening folks. I've been looking into upgrading throughput between my desktop and NAS, as well as adding a router I actually have full control over, so I can properly set up OpenVPN on it instead of hosting that on the NAS. I was looking into 10Gbps, but noticed that I could find 25Gbps PCIe cards for around the same price.

During my research I learned a few things:

  1. My PC has a single PCIe 4.0 x2 slot open, which would downgrade to 3.0 x2 with these cards (they're Gen3 devices). I expect this will cap the throughput, but I suspect it will still be fast enough. The NAS has a 4.0 x4 slot, and 3.0 x4 looks like enough to run one port at line rate, though not both at once (see the quick math after this list).

  2. You have to look for listings with photos of the "YottaMark" sticker to verify the card is genuine Intel.

  3. I will absolutely need a better-performing pool to max out the link, but that will come in time, because I'm a madman and I want to experiment with iSCSI-booting a Windows machine, or exposing an iSCSI block device to two VMware hosts and keeping the VMs there instead of on local storage. Speed will be important for both, but especially the latter.

  4. If I use the Intel XXV710 cards, I should be able to get away with non-first-party SFPs, and I want to think about what I expect my network to look like in a few years (with a fancy switch from Ubiquiti or QNAP to handle things) to avoid buying something I'll have to throw away.

  5. I'd like to use fiber because I see direct-attach copper discouraged so often in TrueNAS posts I've perused, and it looks like it'd be OM4 LC-LC.

  6. You want to avoid link aggregation unless you really know what you're doing and what you're in for. I am not a genius, especially with networking, hence my interest in 25Gbps over using both ports of a 10G adapter.
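
A quick sanity check on the lane math in item 1 (a minimal sketch; the only assumptions are the standard PCIe line rates, 128b/130b encoding, and that the NIC is a Gen3 device, so real-world protocol overhead will shave a little more off):

```python
# Rough usable bandwidth of a PCIe link, ignoring protocol overhead.
def pcie_gbps(gts_per_lane: float, lanes: int) -> float:
    """Line rate in Gbit/s after 128b/130b encoding."""
    return gts_per_lane * lanes * (128 / 130)

# A Gen3 card negotiates 8 GT/s per lane even in a Gen4 slot.
desktop = pcie_gbps(8, 2)  # desktop slot runs at 3.0 x2
nas     = pcie_gbps(8, 4)  # NAS 4.0 x4 slot runs at 3.0 x4 with a Gen3 card

print(f"desktop 3.0 x2: {desktop:.1f} Gbit/s")  # ~15.8 -> caps a 25GbE port
print(f"nas     3.0 x4: {nas:.1f} Gbit/s")      # ~31.5 -> one port at line rate
```

So the desktop tops out around 15-16 Gbit/s (still a healthy jump over 10GbE), and the NAS can run one 25GbE port flat-out but not both simultaneously.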

So now we're on to what I'm not sure of. I tried to use a pfSense box in the past with an old Cisco 2960 and couldn't figure out how to make a working VLAN trunk, but I was also fairly green (not that I've been practicing Cisco CLI since…). I only have so much patience for tinkering: I like the idea of messing with stuff to make it cooler/faster/more useful maybe once a week for a few hours, but I don't come home from an IT job to work another one at home.

I can entertain a semi-complex setup for my lab, but I'm not going crazy anytime soon. Would a pfSense box with another XXV710 be the way to go here? Does pfSense even support this NIC? Should I just skip the router and directly attach NAS → desktop with static IPs for simplicity? Will my limited PCIe lanes cause major issues I don't foresee, other than just limiting the speed headroom?

Mostly, I'm looking to see if the more homelabby people out there know of pitfalls I might hit before I jump headlong into a $500-600 ordeal for the fiber alone, plus a pfSense/OPNsense/something-else box.

Who is discouraging DAC cables?

If you are connecting within the same rack you should absolutely use DAC cables.

2 Likes

I have seen it around the TrueNAS forums, e.g. the 3rd comment here, where they cite the fickleness of interoperability and the notion that DAC is only really as efficient as fiber over pretty short distances.

I didn't think to mention it in the OP, but my desktop would require a 100 ft cable from where the NAS/pfSense box would go. Between those two, a weensy little 5 ft run would probably be as long as I'd ever need.

Interoperability gets cited again here, and I can't guarantee that I'm always going to get Intel or Mellanox or Chelsio cards of all the same brand in the future. It sounds like using fiber with a transceiver that the card on either end likes is the best way to ensure I can mix and match.

Yes, but a DAC costs a fraction of two transceivers and a fiber cable…

The DAC/transceiver lock-in is on the switch side, not the cards, and unless you buy a fake Intel or a broken card, they usually just work, with DACs or transceivers…

3 Likes

DAC cables, as @MadMatt said, are a fraction of the cost of fiber and more robust and durable. You should be using DAC as much as possible and only fiber when you are leaving the rack.

These guys saying to go straight for fiber are idiots. Why would you spend the money on fiber when a $10 DAC cable will get the job done?

1 Like

If you buy one of the transceivers with expensive stickers (Cisco, Aruba/HPE, etc.), then yes.
Plenty of non-idiotically priced options out there, especially for the home lab.

I found some from fs.com for about 40 bucks that can do 10 or 25Gbps.

1 Like

I have a few FS #111922 (SFP+) and one pair of FS #97267 (SFP28) in my setup. No issues so far, and my MikroTik things will complain to me when those transceivers get too warm (transceivers talking to their "host" like that is a feature called DOM, digital optical monitoring).
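
On Linux you can peek at that DOM data yourself; a minimal sketch, assuming ethtool is installed and the NIC driver supports module EEPROM dumps, with a made-up interface name:

```python
# Print the temperature lines from an SFP module's DOM data.
# "ethtool -m <iface>" dumps module EEPROM / DOM info on drivers that
# support it; the interface name below is hypothetical.
import subprocess

out = subprocess.run(
    ["ethtool", "-m", "enp1s0f0"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "temperature" in line.lower():
        print(line.strip())
```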

A friend of mine has a few of FS's DACs in use for short runs; that cuts the cost in half (for passive DAC, that is).

Based on the above comments from ucav117 and MadMatt, it sounds like I'll use a passive DAC for the run between the NAS and the pfSense box to save some money there, but I still have to bite the bullet on the cable to my desktop, which is just way too long a run for a DAC.

2 Likes

That is a much more sensible option.

I will say this: I much prefer using fiber over DACs just because I find fiber to be much cooler. I think it's just neat, and I like looking at it knowing there is light running through it.

1 Like

For @FunnyPossum

7 Likes

DACs (over short distances, up to 2 meters) make sense for a home lab as they are:
- Cheaper
- Rugged (I have demon cats, so this helps)

Fiber:
- Costs more
- Not susceptible to EMI
- Lower latency
- Lightweight and flexible (to a point)
- Good over long distances

For my home lab I went with DAC because of cost, and I found a 10GbE QNAP switch for a couple hundred. I paid $13 for a 2-meter DAC, where doing the same connection with fiber would have been 2 × $46 for the SFP+ transceivers plus $36 for the fiber, or $128 total. My cats would have chewed up a fiber cable in a couple of days.
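
Putting those numbers in one place (just the prices quoted above, nothing else assumed):

```python
# Per-link cost comparison using the prices quoted above.
dac_2m    = 13   # 2 m passive DAC
sfp_plus  = 46   # per SFP+ transceiver; a link needs two
om4_patch = 36   # fiber patch cable

fiber_link = 2 * sfp_plus + om4_patch
print(f"DAC link:   ${dac_2m}")                        # $13
print(f"Fiber link: ${fiber_link}")                    # $128
print(f"Saved per short run: ${fiber_link - dac_2m}")  # $115
```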

If you are in an area with lots of EMI then fiber may make more sense, but unless your house is directly under a cell tower, that's unlikely.

Fiber makes sense (in a home lab) for runs between buildings (protected in conduit); then use DAC for the point-to-point connections. At work we use fiber exclusively, but I don't pay those bills. :stuck_out_tongue:

Technically DACs have slightly lower latency, but that matters more for high-frequency trading, where every nanosecond counts. Not so much in your home lab.

True, but I did say up to 2 m DAC. You'll note performance goes through the floor with anything longer, and I recommended fiber for longer runs. Right tool for the job, but home labs typically have restricted budgets. Cheers.

Why are you even arguing about it? DAC for anything within DAC range (I haven't seen anything longer than 3 m), fiber for everything more distant or for media conversion purposes (e.g. SFP → RJ45).

Unless your rack is built out of railway-grade magnets.

The TrueNAS forums can be pretty disappointing at times. I always preferred the STH forums or here for server-specific stuff.

3 Likes

Update: I've ordered the fiber gear (and one Intel DAC, because cheaper), but I've also decided that instead of using giant piles of mirrored-pair HDDs for speed, I'm going to get some enterprise-grade flash like the Intel P5600 or a similar U.2 SSD. I'll probably use 6 of them in 3 mirror pairs, with an x16 card for the first 4 SSDs and the NVMe SSD redriver thing for the other two.
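
Rough math on that layout, for anyone following along (a sketch, assuming 6.4 TB per drive, ideal striping across the mirror vdevs, and none of ZFS's metadata/slop overhead):

```python
# Capacity math for 6x 6.4 TB U.2 drives arranged as 3 striped mirror pairs.
drives, drive_tb, mirror_width = 6, 6.4, 2

vdevs     = drives // mirror_width  # 3 two-way mirror vdevs
usable_tb = vdevs * drive_tb        # one drive's capacity survives per mirror

print(f"{vdevs} mirror vdevs, ~{usable_tb:.1f} TB usable")  # ~19.2 TB

# For scale: 25GbE moves at most 25 / 8 ~= 3.1 GB/s, so even a single
# enterprise NVMe drive gets close to saturating the link sequentially;
# the mirror layout here buys redundancy and IOPS more than raw speed.
```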

Guess I'm going Threadripper for the lanes and the PCIe 4.0. I've seen some 3960Xs on eBay for about $750, and some 3945s for like $110 and others for $600. The super-cheap ones make me sus, but man, would it be neat to get onto TR without having to spend a grand on the chip and mobo alone.

1 Like

MMK, so I kinda did a stupid and ordered 6 WD Ultrastar SN640s (6.4 TB) I saw for a decent price. Now I find myself looking for cards that will work with them.

First I was looking at the LSI 9400-16i, but per the manual these appear to be designed to connect to a backplane with enabler cards, not directly to the drives.

I found a link to a Supermicro add-on card, but it would've required bifurcation. Then I found the AOC-SLG3-8E2P in a ServeTheHome forum post, and it looks like it should work fine along with these OCuLink to U.2 + power cables.

Has anyone else used these in TrueNAS with U.2 drives? That board is about 450 bucks where I can find it; I'd rather not spend even more on that card if it won't work.

Suggestions outside of what I've found are also welcome, of course.

You can connect that directly to drives if you want. An expander will let you connect more, but there's nothing stopping you from a direct connection. I can't remember if I have a 9400 or a 9500, but it is a 24i version and I directly connect 24 HDDs to it.

Edit: the brochure for that card says you can direct-connect NVMe drives in an x4 or x2 lane configuration:
https://docs.broadcom.com/doc/BC00-0459EN
So if you got cables that did x2 mode, you could directly connect 8 drives to that HBA.
It appears you will want SFF-8643 to dual SFF-8639 cables, but I'm not positive on how going to x2 mode works.
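
For what it's worth, here's the lane budget as I read that brochure (a sketch; the 16 internal lanes across four SFF-8643 connectors and the PCIe 3.0 x8 uplink are my assumptions about the 9400-16i):

```python
# Lane budget for a tri-mode HBA like the 9400-16i, assuming 16 internal
# lanes (4x SFF-8643 connectors) and a PCIe 3.0 x8 uplink to the host.
GBS_PER_GEN3_LANE = 8 * (128 / 130) / 8  # ~0.98 GB/s usable per lane

internal_lanes = 16
uplink_gbs = 8 * GBS_PER_GEN3_LANE       # ~7.9 GB/s shared by all drives

for lanes_per_drive in (4, 2):
    drive_count = internal_lanes // lanes_per_drive
    per_drive   = lanes_per_drive * GBS_PER_GEN3_LANE
    print(f"x{lanes_per_drive}: {drive_count} drives at ~{per_drive:.1f} GB/s "
          f"each, sharing a ~{uplink_gbs:.1f} GB/s uplink")
```

So x2 mode would fit all 6 of those SN640s on one card at roughly 2 GB/s per drive, before they start contending for the HBA's own x8 link.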

1 Like