HELP! choosing (budget Enterprise?) SSDs for Cisco UCS / ESXi 6.5 host

I recently scored some Craigslist treasure and I would love to hear some input and recommendations regarding what the L1T community would suggest for my VMware primary datastore for hosting various VMs. This is an older Cisco C240-M3 and I'm intending it to become an all-in-one compute host. I've read through this guide, which was a helpful starting point: https://www.servethehome.com/buyers-guide-datacenter-ssd-inexpensively/

I have a friend running four 850 EVOs in RAID10 with his R720xd, but I'm not convinced I want to go the consumer route. I'd be open to hearing any feedback, suggestions, or pro tips. My UCS machine has the LSI 9266-8i controller with 24 available 2.5" bays. At the very least I'd like to have a pair of ~400GB SSDs in RAID1 to complement the currently running 600GB Seagate Savvio 10K.5 drives (x3) in a single RAID0 datastore for initial testing purposes.

Intel, Samsung? New? Used? Let’s discuss. . .

Enterprise-grade drives are typically much more expensive, and have only three real differences. First, a higher permitted normal operating temperature (for Kingston, 55°C vs 40°C), which doesn't matter to anyone without a data center unless you're putting the drive in a crawlspace or an unventilated wardrobe. Second, a better unrecoverable bit error rate (UBER), normally rated around ten times better (for Kingston, one unrecoverable error per 1.11 petabytes read vs one per only 0.11 petabytes). On paper that matters enormously if you plan to use them in an array, though not so much as JBOD; in many real-world tests consumer drives manage much closer to the enterprise figures than the spec sheets suggest. Finally, they have a higher rated Mean Time Between Failures (in Kingston's case 2x, at 2 million hours).
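
To put that UBER difference in perspective, here's a rough back-of-envelope sketch (my own illustration, treating errors as independent and taking the spec-sheet ratings at face value) of the odds of hitting at least one unrecoverable error while reading a whole array back, as you would during a rebuild:

```python
# Rough sketch: chance of >= 1 unrecoverable read error (URE) while
# reading an entire array back, e.g. during a RAID rebuild.
# Assumes errors are independent and the spec-sheet ratings hold.

PB = 10**15  # bytes per petabyte (decimal)

def p_at_least_one_ure(bytes_read, bytes_per_ure):
    """Probability of at least one URE over bytes_read."""
    return 1.0 - (1.0 - 1.0 / bytes_per_ure) ** bytes_read

rebuild_read = 4 * 400 * 10**9  # reading four 400GB drives back

for label, rating in [("enterprise, 1 URE per 1.11 PB", 1.11 * PB),
                      ("consumer,   1 URE per 0.11 PB", 0.11 * PB)]:
    print(f"{label}: {p_at_least_one_ure(rebuild_read, rating):.2%}")
```

Small either way on a single pass, but it compounds with array size and with every scrub or rebuild.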

All in all, determine what uptime is worth to you. If you're making regular backups (and you should be), consumer drives are totally fine for most home lab uses. But for a business application, where a drive failure results in service unavailability that hurts productivity or reputation, it's rarely worth even considering them.

2 Likes

I agree with @SheepInACart. With good backups and in RAID1, I wouldn’t spend the extra money on the enterprise drives in a home lab or even for most home use. I’ve even used good quality consumer SSDs in small business applications where SSD IO was needed, backups were solid, and RAID 1 or 10 was in use.

As long as you don’t need something current gen, you can find deals on unopened/unused older enterprise SSDs on eBay. You will be getting drives that are 4-5 years old though. My guess is that they’re cold spares that were never used.

Just make sure to dig up some old reviews before making the purchase to make sure they’re good drives.

Thanks for all the great input! I’m trying to learn as much as possible about all the enterprise features I’ve missed out on through my career and am willing to invest a bit more for a pair of solid SSDs to run my compute host from, or with. I’m sure I’ll eventually end up with a mix of various devices, but for now I’m trying to brush up on Enterprise options.

Have a pair of 960 EVOs running my daily driver and they have been absolutely fantastic… Trying to branch out a bit. Really need three to play nice with ESXi and my LSI 9266-8i in the Cisco UCS box.

Will keep checking Fleabay for 400GB Intels, or whatnot.

This plan is ideal, as I want to be able to toy with the drive integration into my Cisco UCS Integrated Management Controller. This will surely cost as much as (probably more than) I have invested in the entire server.

Could anyone chime in with their experiences on these Intel 545s: https://www.newegg.com/Product/Product.aspx?Item=N82E16820167429

Performance

  • Max Sequential Read: Up to 550 MBps
  • Max Sequential Write: Up to 500 MBps
  • 4KB Random Read: Up to 75,000 IOPS
  • 4KB Random Write: Up to 90,000 IOPS
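
(A quick sanity check: converting those rated 4 KB random figures into throughput with the usual decimal math. Small random I/O is what a pile of VMs mostly generates, so these numbers matter more than the sequential ones.)

```python
# Converting rated 4 KB random IOPS into throughput for comparison
# with the sequential numbers (decimal MB, as spec sheets use).
def iops_to_mb_per_s(iops, block_kb=4):
    return iops * block_kb / 1000

print(iops_to_mb_per_s(75_000))  # ~300 MB/s random read
print(iops_to_mb_per_s(90_000))  # ~360 MB/s random write
```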

Features

  • 64-Layer TLC 3D NAND
  • AES 256-bit encryption
  • Intel quality and reliability
  • Low power and space conscious form factors

I’ve never used these drives personally, but I’m sure they will be fine for your purposes.

For what it’s worth, I tried using a SanDisk SATA SSD as an ESXi datastore and it was exceptionally slow, while the Intel 600p it was booting from was normal. And by slow, I mean like 1 MB/sec. I reformatted, repurposed, and speed-tested it on other platforms, and it’s back up to 400-500 MB/sec. Maybe it’ll work if I give it another go with ESXi, but I’m not going to waste any time trying. Possibly a fluke, but worth noting.

1 Like

THANK YOU! This is the type of experience I find valuable. Wanting to at least start with a couple of enterprise-class solid state drives for the ability to use integrated management features is probably going to be a foolish expenditure, but it’s worth the slight additional cost to gain that knowledge and experience. Eventually I may end up settling on Samsung Pros, but for my initial RAID1 solid state array I want to proceed down the enterprise path.

For my use case (at least initially), I could probably get away with this old 64GB Crucial C300 with 88% life remaining on it. I can justify these costs as continuing education.

1 Like

Sometimes with older controllers or drivers, ESXi will not recognize SSDs as SSDs. That wouldn’t necessarily explain the horrible I/O, but over time, not running TRIM or not over-provisioning these drives could cause issues.
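
If TRIM never reaches the drive (common behind hardware RAID), the usual workaround is to leave a slice of the drive unpartitioned so the controller always has spare blocks to work with. A quick sketch of the math, purely illustrative:

```python
# Illustrative only: effective over-provisioning when you leave part
# of an SSD unpartitioned because TRIM doesn't pass through the RAID
# controller. OP is usually quoted as spare space / user-visible space.
def over_provision_pct(raw_gb, partitioned_gb):
    return (raw_gb - partitioned_gb) / partitioned_gb * 100

print(over_provision_pct(480, 400))  # ~20% OP on a "480GB" drive
```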

1 Like

That’s another aspect I’m lacking serious knowledge in – RAID controllers. I know I have the LSI 9266-8i, which supports up to 6Gb/s SAS drives. Trying to do the dance of ensuring whatever hardware I end up choosing will be properly supported by Cisco, the RAID controller, and ESXi has been a bit daunting. Research and learning is half the fun!

This community is fantastic. Thanks to all who have contributed, and continue to.

1 Like

Don’t worry about the RAID controller too much. LSI is always a sure bet for enterprise applications, especially since you aren’t trying to use an ancient model. VMware has official support (it might even be baked in, with no need to install a VIB).

I would recommend playing around with different levels of RAID if you have the gear, possibly even setting up a RAID 1 or 10 with SSDs and another RAID with HDDs. Configs can get rather interesting. Just don’t put your datastore on a mixed SSD/HDD RAID; that kills the advantages of the SSDs. Best practice is a datastore on flash and a datastore on HDDs, with the OS virtual disk on the flash and any less IO-intensive stuff on another virtual disk on the HDDs.

I would also recommend setting up a free account with VMware and demoing their advanced stuff like vSAN and vMotion. Nested virtualization is your friend in the lab. I haven’t worked with vSAN that much, but from what I remember you don’t want to use RAID with it; it wants as close to bare-metal exposure to your drives as it can get. You can also flash your LSI controller into what’s called IT mode for vSAN or Software Defined Storage in general. I don’t know offhand if that model supports IT mode, but a quick check on the website will tell you.

Bottom line, just mess around with your gear, FUBAR some RAIDs, and have fun learning!

Apparently this controller is based on LSI’s SAS2208 chip, but only has PCIe Gen2 – or so I read online. Currently booting ESXi 6.5 (free) from USB, using the three 600GB Seagate 10K.5 drives in a basic RAID0 for initial testing. It’s running well; I spun up a bunch of test VMs and have been learning tons.

Now I’m ready to step things up a notch. I was hoping to find a pair of ~400GB SSDs to run my VMs from in a RAID1 array and use the spinning rust disks for other storage. I’ve already acquired three 3.5" 8TB WD Reds over the past ~six months as I found sub-$200 sales, so eventually I’ll probably look into a dedicated NAS box for media storage. Currently I have an 8TB mirrored Storage Space running on my Windows 10 Pro daily driver, with a single 8TB external USB drive for backups. Sadly, I don’t believe I can present a ReFS-formatted Storage Space to ESXi. I boxed myself in with poor planning, but I HAD to have this Craigslist treasure and I don’t regret it a bit.

EDIT: How about a couple of these, what you guys think? – https://www.amazon.com/Intel-480GB-SATA3-Solid-SSDSC2BB480G701/dp/B01KEEM144/

1 Like

Don’t worry about this. It’s plenty of bandwidth for whatever you can throw at the controller on a SATA or even SAS interface.
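
Back-of-envelope, using my own rule-of-thumb numbers rather than spec-sheet values (PCIe 2.0 at roughly 500 MB/s usable per lane after encoding overhead; a 6Gb/s port at roughly 600 MB/s; a good SATA SSD around 550 MB/s): even if the card really is limited to Gen2, the uplink dwarfs what a mirrored pair of SATA SSDs can push.

```python
# Back-of-envelope controller bandwidth check (rule-of-thumb numbers,
# not spec-sheet values).
pcie2_x8_mbps   = 500 * 8   # ~4000 MB/s host uplink on PCIe 2.0 x8
ports_6g_mbps   = 600 * 8   # ~4800 MB/s across eight 6Gb/s ports
raid1_pair_mbps = 550 * 2   # two SATA SSDs reading in a mirror

print(pcie2_x8_mbps, ports_6g_mbps, raid1_pair_mbps)
```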

I’m not sure if this is possible either, especially on Win 10 Pro. You could play around with iSCSI, if Pro supports it, or an NFS mount point on a ReFS volume, but these might be Server-only features. Also, don’t forget to play around with Hyper-V on Win 10 if you want.

Most likely a late response, but here goes.

Background:
I run a SaaS service (a search engine) that sees a lot of reads/writes because the Lucene indexes rebuild very often.
We run 14 host servers serving around 50k rpm (requests per minute) at any given time.
Each server has RAID 1, so all the SSDs get the same writes at the same time. (Within the last 4 years that service has chewed through something like 100-200 SSDs; there are other servers in the SaaS platform too, like utility servers, recommendation engines, etc.)

Lifetime-wise

  1. Intel SSD DC S3700 - This one lasts the longest, around 1 year. (longest life)
  2. Samsung PM863 - This one lasts around 9 months. (decent)
  3. Kingston DC400 - Around 6 months. (not worth it…)
  4. Sandisk CloudSpeed - Around 5-6 months. (stay away…)
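
Those lifetimes line up roughly with the drives' endurance classes. Here's a sketch of the math; the daily write volume is a pure assumption on my part, and the TBW figures are just the standard DWPD times capacity times 5-year-warranty arithmetic, so check the actual spec sheets:

```python
# Sketch: months until rated endurance (TBW) is exhausted at a given
# write load. TBW here = DWPD * capacity * 5-year warranty, e.g. a
# 400GB drive at 10 DWPD ~= 7300 TBW. The daily writes are a guess.
def months_to_wearout(rated_tbw, writes_tb_per_day):
    return rated_tbw / writes_tb_per_day / 30

daily_tb = 20  # hypothetical per-server writes from constant index rebuilds

for cls, tbw in [("write-intensive (~10 DWPD, 400GB)", 7300),
                 ("read-intensive  (~1 DWPD, 400GB)", 730)]:
    print(f"{cls}: {months_to_wearout(tbw, daily_tb):.1f} months")
```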

Speed-wise / Performance

  1. Samsung PM863
  2. Intel SSD S3700

Note: Big search engine companies prefer Intel’s SSD DC S3700; from what’s public, even Baidu uses it.

2 Likes

Thankfully, there are workarounds. I have not played with Win 10 much at all, and you probably won’t find much documentation, as most storage groups are still cranking out certifications using Windows 2012r2 (the 8.1 equivalent). What you can do, though, is create a virtual disk inside your storage pool and present that to VMware. A bit of extra work, and I don’t recommend it for actual use, but if you’re just testing ESXi or learning how it works, it would be a great solution.

Here’s an example guide on how to do that with 2012:

EDIT:
This might also be helpful:
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-4ED3304A-ED4F-4692-825F-83637E04D592.html

1 Like

This is incredible advice. I started with VirtualBox but found it rather clunky. Discovering that Hyper-V was already built into my daily driver was a huge boon. I upgraded the RAM in my workstation and have used it to learn and grow over the past 6+ months as I continued to research and plan my wannabe homelab.

Only when I discovered a 32-core Cisco server on CL did I jump into this. The price was right.

1 Like

Thank you for the incredibly detailed first-hand experiences! This is super helpful, as I’m looking for an ultra-durable SSD to be the center of my single-node compute host.

Ended up grabbing a couple of these trays: http://www.ebay.com/itm/122630296203

… to go with the 800GB HGST 12Gbps SAS SSDs I found on fleabay. Hoping that’s more than sufficient to get me started. We’ll find out, possibly as soon as next week.