It's time for my company to replace our aging Dell 2970 6-core Opteron SQL server. This thing is ancient, and we are now maxing out disk I/O with our daily SQL db use. I am sure the DBA needs to clean up the database a bit, but this thing needs to be replaced anyway.
I have been looking into new Dell servers, as they are still fairly reliable and the support and service are unmatched. There will probably be two identical servers running ESXi so we can move the VM around in case of a failure (and because SQL licensing costs more than the hardware). We might also run a few other VMs.
My boss has hinted that he wants to use the DAS cache that Dell has been pushing a lot.
What features and hardware should I be looking at for the best bang for buck on these machines? What CPU would be best suited for this kind of use?
If a machine is running SQL Server exclusively, you are better off with fewer, faster cores than lots of slower cores.
The Xeon E5-2643v3 (hex-core) and E5-2637v3 (quad-core) are probably the best if you want to keep SQL Server licence costs down whilst maximising CPU performance.
Running SQL Server 2014 in a VM under either VMware ESXi or Hyper-V is not an issue. Just remember that if you are on a per-core licensing model you will still have to purchase a minimum of 2 x 2-core licence packs. You will also need Software Assurance if you want to be able to migrate your workload from one host to another, as failover rights are no longer included in the licence.
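If you want to sanity-check what you'd actually be licensing, you can ask the instance how many logical CPUs it sees. A minimal sketch using the standard `sys.dm_os_sys_info` DMV:

```sql
-- How many logical CPUs does this instance see?
-- Under per-core licensing, licences are sold in 2-core packs,
-- so this count (with the 4-core minimum) drives the pack count.
SELECT cpu_count AS logical_cpus,
       hyperthread_ratio
FROM sys.dm_os_sys_info;
```

On a VM this reports the vCPUs presented to the guest, which is what per-core licensing of the VM is based on.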
As far as designing for failover, you could run a standalone VM or build a guest cluster of two VMs. The latter lets you fail over the database independently of the host server, which protects against problems within the VM and allows for OS/SQL Server patching. Obviously this is a bit more complicated to deploy, as you need to create shared storage or use Availability Groups or Database Mirroring.
Even if you are using a SAN, you can still benefit from SSDs or Fusion-io cards in the host servers by using them for TempDB and Buffer Pool Extensions (assuming you can't just give SQL Server loads of RAM).
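For reference, on SQL Server 2014 both of those are only a few statements. A rough sketch; the drive letters, sizes, and file paths below are placeholders for your environment:

```sql
-- Point Buffer Pool Extension at a local SSD (SQL Server 2014+).
-- Path and size are placeholders.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'S:\BPE\sqlbuffer.bpe', SIZE = 64 GB);

-- Move TempDB onto the SSD; the new location only takes effect
-- after the SQL Server service is restarted.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = N'S:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = N'S:\TempDB\templog.ldf');
```

Since TempDB is recreated at every service start, you don't need to move any files by hand; the restart does it for you.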
Getting your database onto flash, or at least using a hybrid storage solution that puts your transaction logs / syncs on flash if your dataset is mostly cold, is usually the first thing people do to increase database performance.
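If you only want the transaction log on flash, that's a metadata change plus a file move. A sketch, where `MyDb`, the logical file name, and the paths are all hypothetical:

```sql
-- Repoint the log file at the flash volume (placeholder names).
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, FILENAME = N'F:\Logs\MyDb_log.ldf');

-- The new path is only picked up when the database comes back online:
ALTER DATABASE MyDb SET OFFLINE;
-- ...copy the .ldf file to F:\Logs\ in the OS, then...
ALTER DATABASE MyDb SET ONLINE;
```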
Also, consider enabling compression if you have the option; it's usually a worthwhile tradeoff for most database workloads.
I would strongly recommend against that for highly transactional OLTP workloads, especially if you are using fast flash drives; it is amazing how quickly the CPU becomes the bottleneck.
Compression is great if you have slow disk and a shortage of RAM: it is faster to read the data pages from disk, and they stay compressed in the buffer pool. But on systems with low-latency storage and lots of RAM, it's not worth the CPU cycles to compress/decompress the pages. There can also be other side effects, such as locks taking longer to be released.
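Rather than guessing, you can estimate the savings per table before paying the rebuild cost. `dbo.Orders` below is a hypothetical hot table; the estimator itself ships with SQL Server as a system stored procedure:

```sql
-- Estimate how much PAGE compression would actually save.
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'Orders',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';

-- If the tradeoff looks worth it, enable it per table/index:
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Compression is per table/index, so you can keep it off your hottest OLTP tables and still use it on large, rarely updated ones.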
I guess each case is different and depends on the workload and the shape of the "boxes" or VMs you're running. We were running 450 GB of tables on local NVMe flash with 16 cores and 32 GB of RAM; it was mostly a web workload plus some analytics on replicas, with relatively static data: about 5% data churn per day, and over 90% of the read traffic in our case was point lookups.
I guess getting a definitive answer would require some A-B testing, as each case is different.
It's nothing super crazy. About 400 local users hitting a db through several in-house websites. We probably don't plan on running compression, as the jump to a new server should be a massive difference in the first place.
Again, it would depend on the websites. Oddly, you're not using MySQL for the website stuff; I'd have thought that website devs who prefer MySQL would have been easier to find, and it would allow you to avoid the licensing mess.
All the devs are Microsoft guys building upon tools that were developed about 7 or 8 years ago. They really love the MS Query Analyzer and refuse to look at comparable options for other databases. The only developer that had done anything with open-source resources left the company just the other week.
Trust me, if it were up to me it would be all Linux/BSD and mysql or postgres. Licensing makes everything a mess, and they are constantly making it worse every release.
Postgres is where I would go if looking for a solid open source RDBMS with advanced querying capabilities. MySQL always felt a bit 'hacked together' to me, but I haven't gone near it in a long, long time. I expect MariaDB is pretty solid by now.
I really like SQL Server, but Microsoft should offer it on Linux (even if only supported on a few distros, e.g. Red Hat or SUSE) and ideally should open source at least the SQL Server Express Edition.
Given how much Microsoft needs to throw their weight behind open source to get Azure usage to their desired levels, I wouldn't rule out either happening within the next 5 years, as SQL Server really is a good product with a loyal fanbase.
The current licencing isn't so bad for a greenfield project; it's just a pain that they keep taking features out of the product licence and placing them into Software Assurance, so you have to buy that if you want to fail over workloads etc. But if you have an environment with lots of versions/editions deployed, it is a total f***ing nightmare when you come to true-up on usage.