Quickie howto guide:
For mine the config necessary was:
1. Hook up Cisco Console cable (9600-8-n-1, no flow control of any kind)
2. Log in as admin with no password. (If a password is set, there is a good chance you can reset it by interrupting the boot process with the serial cable attached.)
3. ```enable``` command.
4. ```config``` command.
5. ```interface e 1```
6. ```no shutdown```
Repeat steps 5 and 6 for all interfaces. You can run ```show running-config``` to get an idea of your current config.
Once you're satisfied everything is working properly, run ```copy running-config startup-config``` and the settings you've changed will survive a reboot.
Be sure to lock it down, set a password, etc. If you elect to use the management interface, you can set an IP that makes sense for your environment.
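Putting the steps above together, a minimal console session might look something like this (prompts and interface naming vary by platform; `e 1` here is just the first port, so check `show running-config` on your own unit):

```
! connect over the console cable first, e.g.:  screen /dev/ttyUSB0 9600
login: admin
switch> enable
switch# config
switch(config)# interface e 1
switch(config-if)# no shutdown
switch(config-if)# exit
! ...repeat for the remaining interfaces...
switch(config)# exit
switch# copy running-config startup-config
```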
I believe UBNT has a similarly priced switch with a mix of RJ45 and SFP+? I debated the extra outlay vs. the MikroTik, since I have other UBNT hardware and like the management aspect.
But RJ45 locks me to a specific medium, whereas SFP+, though maybe more expensive, lets me run short-range fiber, even shorter-range Cat6/6a (RJ45), or 80 km of singlemode if I wanted to get crazy, all from the same port format.
Copper is the way to go for most DIY structured-wiring projects. At home I’ve pulled pre-terminated fiber jumpers through conduit and walls; it isn’t as easy as I thought it would be, but it did work, and now I’m good to 100 gig muhahah.
At work I’d bring in a crew to pull singlemode for me and have them fusion-splice the ends.
That’s the thing. It’s loud AF, 48-port, and probably a power-chugging monster. I’m also looking at the new MikroTik 8-port SFP+. It’s passively cooled and new.
One question though. Is the latency on these old switches really that bad compared to new stuff?
I use my freenas iSCSI shares for almost everything, my Steam library, programs, work space (blender, 3D printing stuff, UE4, etc.). Would there be a noticeable difference?
SFP+ (DAC or fiber) has better latency than 10GBASE-T.
The enterprise people care about this because they often have several layers of switching to cross the DC, multiplying the per-link latency.
Even with several microseconds of link latency, the software layers (NIC -> interrupt -> kernel -> userspace) will dominate, unless you’re running RDMA HPC codes – those usually busy-poll, burning a CPU core until the data arrives; CPU is the price you pay for low latency.
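To put rough numbers on that argument (every figure below is an illustrative assumption, not a measurement – actual PHY and software-stack latencies vary by hardware and OS):

```python
# Illustrative latency budget: switch-hop PHY latency vs. the software stack.
# All constants are assumed ballpark values, not datasheet numbers.
SFP_PLUS_HOP_US = 0.3     # assumed ~0.3 us per SFP+/DAC switch hop
TENGBASET_HOP_US = 2.5    # assumed ~2.5 us per 10GBASE-T hop (block coding/DSP)
SOFTWARE_STACK_US = 30.0  # assumed NIC interrupt -> kernel -> userspace cost

def one_way_us(hops: int, hop_us: float, stack_us: float = SOFTWARE_STACK_US) -> float:
    """One-way latency: all switch hops plus a single software-stack traversal."""
    return hops * hop_us + stack_us

for hops in (1, 3):
    sfp = one_way_us(hops, SFP_PLUS_HOP_US)
    tbt = one_way_us(hops, TENGBASET_HOP_US)
    print(f"{hops} hop(s): SFP+ {sfp:.1f} us vs 10GBASE-T {tbt:.1f} us "
          f"(difference {tbt - sfp:.1f} us)")
```

Even with three hops, the PHY difference is a few microseconds against a software stack that is an order of magnitude larger – which is why only busy-polling RDMA setups really notice it.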
322 W typical if you have it completely populated and are running with all ports active. It’s not going to come near that if you’re only using a few ports. You can also run it off a single PSU if you don’t need HA and make it more efficient. The TCO in your power calculations is way off.
When you have a homelab where noise and heat are a concern and you only ever run half the ports at full speed, MikroTik wins. If I were running something that actually needed all 48 ports and could throw the thing in an air-conditioned closet, sure, the Arista would be great value.
As pointed out before, the power consumption will also cost you. Even if the system drew half of what it’s quoted at (301 W typical, 395 W max) – say 200 W – you’re still looking at 2/3 of what Kuro68k quoted earlier, so it’s still a rather expensive thing to run if you don’t actually use all the ports. Whereas the max power consumption of the 16-port system is 44 W, a fifth of what the Arista would more than likely draw, or roughly a tenth of what it will draw fully loaded.
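A back-of-the-envelope sketch of the running-cost difference (the 200 W half-load guess and the 44 W max come from the posts above; the electricity rate is an assumption – plug in your own):

```python
# Annual electricity cost for a switch running 24/7.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.25  # assumed rate in $/kWh; substitute your local price

def annual_cost(watts: float, price_per_kwh: float = PRICE_PER_KWH) -> float:
    """Yearly cost in dollars for a constant draw of `watts`."""
    return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

print(f"Arista @ 200 W (half-load guess): ${annual_cost(200):.0f}/yr")
print(f"MikroTik 16-port @ 44 W max:      ${annual_cost(44):.0f}/yr")
```

At that assumed rate the gap is a few hundred dollars a year, which is real money when the switch itself was a few hundred dollars.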
But as we all know every tool is a hammer if you are brave enough.
I run a homelab as well. You can put the gear in a closet or small room where you won’t hear it; isolating it to a small room also helps with dust and cooling. I’m not a fan of MikroTik (GPL/source violations), and their features are pretty janky compared to the enterprise-level features you get with this switch.
I don’t think your power-draw figures are close to correct, but I don’t have the switch yet to measure what it draws at idle with no active ports, etc.
I tried to play the passive-cooling / low-noise / low-voltage game for quite a long time. If you really want to do stuff, it gets prohibitive. I’d recommend biting the bullet and setting up a space where you can use gear like this. It’ll also allow you to run surplus enterprise servers.
I’ve had my eye on these. I need 40GbE ports big time, but I also need SFP+ ports for 10GbE, and it would be nice if they were in the same unit. Noise is a big concern for me though, so I’m probably not getting anything for a while.