Need help picking out disk shelves for SSDs

Hi there,

I’m a newbie when it comes to enterprise gear and would like to know what kind of disk shelves I can get that support SATA SSDs. I was thinking of using Samsung 860 Pro drives, about 10 of them in RAID 10. I currently have 2 x Dell R720s and am planning to add more servers for redundancy. I’m kind of lost when it comes to picking out a controller that connects to a disk shelf. The purpose is to host a storage server and run XenServer, which will run virtual machines off the RAID array. I heard hard drives are a bit slow for this, which is the reason for the SSDs. Hope someone can help me figure this out. Thank you in advance!

Also, please let me know if this topic is in the wrong category and which one it should go into.

Ok, lots to talk about. Step one:
How much throughput do you actually expect to need, and what network connection are you using?
Here is a RAID performance calculator: https://wintelguy.com/raidperf2.pl

860 Pros will get around 500 MB/s; most 7200 RPM drives manage barely half of that.
But once you put five of them in RAID 0, mirrored as two groups in RAID 1, you are exceeding Gigabit Ethernet with either drive type. Heck, you are close to saturating 10G with spinning drives in that configuration.
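To sanity-check that claim, here is a rough back-of-the-envelope sketch. The ~500 MB/s per SSD and ~250 MB/s per HDD figures are sequential-read ballparks from the post above, not guarantees, and real arrays lose some throughput to controller and filesystem overhead:

```python
# Rough sequential-throughput estimate for a 10-drive RAID 10 array:
# reads can stripe across all drives, writes only across half (mirroring).

def raid10_throughput_mbps(drives: int, per_drive_mbps: float) -> tuple[float, float]:
    """Return an idealized (read, write) MB/s for a RAID 10 array."""
    read = drives * per_drive_mbps          # stripe reads over every disk
    write = (drives // 2) * per_drive_mbps  # each write lands on both mirror halves
    return read, write

GBE_MBPS = 125      # 1 Gbit/s  ~ 125 MB/s
TEN_GBE_MBPS = 1250 # 10 Gbit/s ~ 1250 MB/s

ssd_read, ssd_write = raid10_throughput_mbps(10, 500)  # ten 860 Pros
hdd_read, hdd_write = raid10_throughput_mbps(10, 250)  # ten 7200 RPM disks

print(f"SSD RAID 10: ~{ssd_read:.0f} MB/s read, ~{ssd_write:.0f} MB/s write")
print(f"HDD RAID 10: ~{hdd_read:.0f} MB/s read, ~{hdd_write:.0f} MB/s write")
print(f"HDD writes vs 10GbE: {hdd_write / TEN_GBE_MBPS:.0%} of the link")
```

Even the all-HDD array, in theory, matches a 10GbE link on sequential writes; where SSDs really pull ahead is random IOPS, which this sketch doesn’t model.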
You’ll get an improvement in random IOPS with SSDs, but I’d highly recommend going with 8 spinning drives and putting 2 SSDs in RAID 0 as a cache in front of them. It saves money, nets you a heck of a lot more storage, and in most cases the performance difference versus a pure SSD RAID will be close to zero.

The controller and such depends highly on how you are planning to run all this. Will the drives physically go into the Xen server, or are you planning to run a separate machine for storage?

Also, is this a homelab kind of thing, or are you planning for an actual business production environment?

https://www.45drives.com/products/stornado/


Is it not recommended to go with NetApp disk shelves with an LSI controller in HBA mode?

Thanks, domsch1988, for the reply.
So I’m trying to have about 6 or 7 virtual machines running on the Dell R720 nodes (I have two of them for that), and the other Dell R720 will be hosting FreeNAS for storage. I will be attaching 10-gigabit NICs to them (Mellanox ConnectX-3 or something similar). One of the virtual machines will be hosting a large SQL Server along with other software; the other VMs will also run some software, but won’t be as intensive as the SQL Server VM. As for throughput, I’m just trying to make sure I don’t go anywhere close to the max, leaving a lot of overhead.
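To put a number on “leaving a lot of overhead,” a quick sizing sketch. All the per-VM workload figures below are made-up placeholders for illustration; you’d substitute measurements from your actual applications:

```python
# Hypothetical headroom check: does the expected peak load fit inside a
# 10 GbE storage link with plenty to spare? Workload numbers are placeholders.

LINK_MBPS = 1250  # 10 Gbit/s ~ 1250 MB/s, ignoring protocol overhead

workloads_mbps = {
    "sql_server_vm": 400,   # placeholder peak for the heavy SQL Server VM
    "other_vms": 6 * 50,    # six lighter VMs at ~50 MB/s each (placeholder)
}

peak = sum(workloads_mbps.values())
headroom = 1 - peak / LINK_MBPS
print(f"Peak ~{peak} MB/s -> {headroom:.0%} headroom left on the 10GbE link")
```

With those placeholder numbers the link still has close to half its capacity free at peak, which is the kind of margin the post above is aiming for; random IOPS on the SQL VM is the separate thing to watch.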

Would it be better if I just got another Dell R720 and ran it with the drives and FreeNAS on it, instead of buying a disk shelf and trying to connect it to an external controller?

This is for my work at the office. We’re moving from consumer-level hardware to Dell R720s because of the amount of RAM the R720 can handle.

Ok, SQL Server performance is a b*tch. I’m having constant performance problems at work with the SQL Server holding our production database, and determining the bottleneck is really hard. We threw CPU, RAM, and fast storage at the DB without much effect.

Nonetheless, for a database that needs to be quick and serve many clients, SSD storage is basically a must.

This depends highly on what the VMs are and what they are doing. Most of our VMs are running on spinning-rust RAIDs without problems. Both our VMware and Nutanix clusters are running on higher-capacity storage with spinning disks and we aren’t facing any problems with that.

I still think that for your use case you should think about “tiered storage”: a fast, smaller SSD pool for SQL and anything a lot of users hit simultaneously, and a large pool of regular drives for VM storage and your NAS. Making an all-SSD NAS gets expensive fast and isn’t really necessary unless you are using it to feed 8 or 10 editing workstations with 8K footage directly from the NAS.

Also, Linus did a recent video on a “DIY” Jellyfish replacement:


Basically an all-SSD storage server. They also did some throughput tests on their machine, to give you an idea of what to expect.
For such a machine you’re quickly looking at 20-30 grand.

Yes. SQL is one of those things where, if you figure out how to optimize it to not chug shit through a straw, you’ve become the Database Messiah. Honestly, I’m really wanting to see how Optane DIMMs will handle large SQL databases.

So if I go with SSD drives, then I don’t need a caching drive, right? Like an Intel Optane 900p.