Cisco UCS C240 recommendations or suggestions?

Looking to up my homelab game and have stumbled across a gorgeous rack mount server, but I really know very little about these machines compared to the Dell and HP stuff seen everywhere. Can anyone point out common pitfalls of owning these devices in a small / home office environment?

Intended uses will be (for now) a single-node compute host, probably using ESXi, but I’m also seriously considering Proxmox + Docker as the “Holy Grail” of homelabbing. So much to learn. I understand that drive caddies will be twice as expensive as normal and there may be some issues acquiring firmware, but I really have no idea, which is why I’m here to ask the experts for their opinions. Thanks in advance. Any input welcome and appreciated.

After learning and doing as much as I possibly could between my daily driver and aging hardware, I finally stumbled across some Craigslist treasure, and I’m now the proud owner of a Cisco UCS-C240-M3S as my first piece of used enterprise gear toward my (hopefully) budding homelab.

UCS C240 M3S (24-bay, 2.5" form factor rack mount) specs, updated:
(2x) Intel® Xeon CPU E5-2665 @ 2.40GHz
64GB (8x 8GB) DDR3 1600MHz Registered ECC
(2x) 650W redundant PSU
(4x) Intel Onboard 1Gbps Ethernet
(3x) NVIDIA Quadro 2000
LSI 9266-8i MegaRAID SAS RAID controller
(3x) 600GB Seagate 10K.5 SAS HDD

I’d go with something more mainstream. I’m not sure Proxmox has been tested on that hardware; ESXi certainly has, but these machines are much more enterprise than small or even medium-size business. My guess is that you can’t get replacement parts or firmware upgrades without an active service agreement either, and you won’t find as much on home mods and compatibility.

Unless you can get it at an amazing price I’d stay away.

I once went through the ordering process with Cisco and had to read a 100-page compatibility manual, and wasn’t offered any suggested or typical configurations.

I’d say these are probably convenient in any Cisco-heavy environment where you can pop optical transceivers or twinax between switches and servers without worrying about compatibility.

Well, I think I got a pretty good deal. It came with three old Quadro 2000 GPUs and an LSI MegaRAID 9266CV-8i, yet I know very little about it. Need to get a couple of SSDs for it. And firmware. Ugh. Just walked in the door after a 150-mile trek. $425 felt very fair to me.

For that hardware, I’d recommend ESXi. Stay away from doing anything too ‘creative’; this isn’t a system I’d recommend for GPU passthrough or anything like that. Leverage its ‘enterpriseness’ and use it to run a bunch of smaller VMs. You could VM all the systems you need for Windows MCSE labs with that box. ESXi also lets you create virtual networks, so you could even simulate your MCSE lab across two different ‘branch offices’ with it. Lots of options. But 64GB isn’t a lot of memory.
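For the ‘branch offices’ idea, a couple of isolated standard vSwitches with no uplinks go a long way. A minimal sketch from the ESXi shell (the vSwitch and port group names here are just examples, not anything from this thread):

```shell
# Create an isolated standard vSwitch (no uplinks = traffic stays on-host)
esxcli network vswitch standard add --vswitch-name=vSwitch-BranchB

# Add a port group that the "branch office" VMs will attach to
esxcli network vswitch standard portgroup add \
  --portgroup-name=BranchB-LAN --vswitch-name=vSwitch-BranchB

# Verify what was created
esxcli network vswitch standard list
```

A VM with one NIC on the normal network and one on BranchB-LAN can then act as the router/firewall between the ‘sites’.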

All that being said, learning Hyper-V is also an option. Proxmox is cool… but I don’t consider it enterprise grade yet.

ESXi it is. Except I’m struggling to get the firmware upgraded. Hoping to dive into that today. Will be working on acquiring more memory, but your tips and suggestions are incredibly helpful to get me started. Any thoughts on how to use these three old Quadro 2000 GPUs? Looking into Monero mining as a hobby. My power only costs $0.03/kWh, so I want to go nuts.

Had hoped to spin up a remote Windows 10 desktop environment for work, testing, gaming… Can you elaborate on why this isn’t great for hardware passthrough? Appreciate you schooling me.

That’s weird. It’s kinda straightforward: you download the HUU image, boot from it, and it updates everything it finds.
Edit: but you may get into trouble if you’re using non-Cisco-branded disks with their RAID controllers. As in, the disk firmware will be different and the controller will see the drive as “incompatible” and refuse to update it.

I can’t download the HUU ISO, sadly. It needs a login, and I’m not sure how to pay for one. Is Cisco similar to VMUG? I’m so lost. Hope to look at it this weekend. Lots to learn.

I was considering throwing in a couple of consumer-grade SSDs (850 EVO maybe?) to get started learning, but for now I’ll be content with these spinning-rust platters. I knew this was outside my wheelhouse, but I won’t learn and grow only working on PowerEdge systems. So far it’s been more exciting than frustrating.

Thanks for such a wonderful community to share input and ideas. You all are nice!!

The RAID controller might have issues with them, so try before you buy a stack :slight_smile:

Good trek! $425 sounds like a great price; it will be power hungry once you put it under load, though.
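Worth penciling out the electricity before going nuts with mining. A quick back-of-the-envelope sketch, assuming roughly 400 W sustained draw (a guess for a loaded dual E5-2665 box with three GPUs, not a measurement) and the $0.03/kWh rate mentioned above:

```python
def monthly_power_cost(watts: float, rate_per_kwh: float,
                       hours: float = 24 * 30) -> float:
    """Estimate the electricity cost of a constant load over one month."""
    kwh = watts / 1000 * hours   # energy consumed over the period
    return kwh * rate_per_kwh    # cost at the given rate

# ~400 W around the clock at $0.03/kWh
print(f"${monthly_power_cost(400, 0.03):.2f}/month")  # about $8.64/month
```

At that rate even a fully loaded server costs under ten dollars a month, which is why cheap power makes this hobby viable.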

Cisco UCS stuff requires the switch (Fabric Interconnect), which also manages hardware profiles.
The price might be right, but it’s hard to use it in a standalone setup.

It doesn’t.

There’s definitely a bit of a learning curve here, but so far I’m absolutely loving Cisco’s Integrated Management Controller. I assume this is their version of IPMI or iDRAC or whichever. Seriously struggling to get the firmware properly updated, as there was a fairly well-documented bug when attempting to boot from a USB device and upgrade the firmware locally: https://quickview.cloudapps.cisco.com/quickview/bug/CSCup62091

Have been attempting to work around those issues via the KVM Console in CIMC, but I’m caught in this odd loop where HUU never properly finishes. I’m logged in “remotely” across the LAN via a 1Gbps connection, so I don’t imagine there would be any bandwidth issues. There don’t appear to be any real issues with using it as a standalone compute host. Would love to grow this into a full Cisco stack, but trying to stay focused on the C240 M3S as time permits.

As far as non-Cisco-branded disk drives (or SSDs) go, I would anticipate not being able to get any reporting or statistics on drive health in CIMC, but I will consider my options. Thank you for the heads up! Need to invest in a bunch more drive caddies; I only have three to get started, plus a bunch of blanks.

Probably should just reset everything to factory defaults and start all over. The previous owner was running Windows 10 on the RAID array, so I was able to quickly stress test the machine mining Bitcoin for an hour. Sadly, it was only pushing ~200 kH/sec using all 16 cores / 32 threads plus all 3 GPUs.

There’s a chance they won’t work at all if you put them into RAID. I literally had this kind of issue when a customer somehow managed to replace one drive in an active RAID with an identical model that (I suspect, but couldn’t get it out of him) wasn’t from Cisco: it was the same exact model but with different firmware. The RAID didn’t rebuild itself, LSI WebBIOS (or whatever they use today) hung on launch, and StorCLI reported the drive as Unsupported-Good and failed to update the microcode.
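If you end up with a drive the controller dislikes, StorCLI can at least show you what state it’s in. A couple of read-only commands, assuming the controller is /c0 (enclosure and slot numbers here are examples and will vary per chassis):

```shell
# List every physical drive behind controller 0 with its state
# (Online, Unsupported-Good, etc.)
storcli64 /c0 /eall /sall show

# Full details (model, firmware revision, error counters) for one drive,
# e.g. enclosure 252, slot 0
storcli64 /c0 /e252 /s0 show all
```

That’s usually enough to tell a genuinely dead drive apart from one the controller is merely refusing on firmware grounds.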

Aaand we’re live! Thanks to everyone who has mustered up the effort to chime in on my wannabe homelabbing attempts. Just installed ESXi for the first time and created a datastore. Baby steps. What a rollercoaster of emotion attempting to upgrade firmware. I don’t have a ton of experience with iDRAC and IPMI, but I’m seriously digging CIMC. The KVM Console requiring Java is a real bummer, but I spun up a disposable Win7 VM for this nonsense.

Welcome any words of wisdom for someone just starting down this path; any and all guidance is appreciated. Any required reading? Any common pitfalls? Hope to eventually toy with passing through the Quadros. Need to figure out the drive caddy situation and learn about storage tiering, perhaps. Is that viable with free ESXi if I were to pick up a couple of SSDs? A friend has been running some 850 EVOs in his R720, but per the expert advice here, I’ll be reconsidering that and attempting to learn a little about enterprise SSDs.
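Before buying SSDs, it’s worth checking what ESXi actually sees behind that LSI controller; drives presented by a RAID controller often don’t get detected as flash. A quick read-only check from the ESXi shell:

```shell
# Lists every storage device ESXi sees; check the "Is SSD" and
# "Is Local" fields to confirm how each drive was detected
esxcli storage core device list
```

If a flash drive shows “Is SSD: false”, ESXi will treat it like spinning rust for any caching features, which matters before planning anything tiering-like.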

Any idea if Intel 320 120GB SSDs will play nice with my C240 M3? I pored over the spec sheet, but it only listed the Samsung 1625, Intel 3510, and Samsung PM863 drives. Plus some 12 Gbps SAS drives, but I figured those were outside my budgetary constraints.

6 Gbps drives:
UCS-SD800G0KS2-EP: 800 GB 2.5 inch Enterprise Performance SAS SSD (Samsung 1625)
UCS-SD400G0KS2-EP: 400 GB 2.5 inch Enterprise Performance SAS SSD (Samsung 1625)
UCS-SD200G0KS2-EP: 200 GB 2.5 inch Enterprise Performance SAS SSD (Samsung 1625)
UCS-SD480GBKS4-EV: 480 GB 2.5 inch Enterprise Value 6G SATA SSD (Intel 3510)
UCS-SD120GBKS4-EV: 120 GB 2.5 inch Enterprise Value 6G SATA SSD (Intel 3510)
UCS-SD960GBKS4-EV: 960 GB 2.5 inch Enterprise Value 6G SATA SSD (Samsung PM863)
UCS-SD400G0KA2-G: 400 GB 2.5 inch Enterprise Value SATA SSD
UCS-SD100G0KA2-G: 100 GB 2.5 inch Enterprise Value SATA SSD

Never seen them inside one, so can’t really say.

The Cisco IronPorts were rebranded Dell R710s.

That’s possibly a rebranded Dell R720. It’s definitely comparable in specs. $425 is a great price for that.

As far as SSDs go, I bought 4 of these recently and they had very low usage when tested (under 6TB of writes total and around 2 years of uptime on each). The SM843Ts were released in 2014 and are the SM863’s younger brother. They’re 960GB for $230 each. There are still 2 left, and I think they were all pulled from the same server. Fewer writes to these drives than to the 840 EVO I’ve been running in my daily driver since 2015, and they’re rated for 2 petabytes!

Darn, sold. Thank you!!! This is the type of advice I’m in desperate need of. You guys are all so very knowledgeable around here, I’m super grateful for the help and brain drain…

I guess I could maybe broaden my search parameters a bit. Need some drives as well as the caddies they go in, but don’t wanna spend $30 each.

So much to learn. Just began the process of learning my storage options. An all-in-one host with virtualized FreeNAS sounds real slick.