Took delivery 2 days ago from Dell.ca.
The interesting thing about this purchase was that the default build starts you off on the 12G platform.
I got all messed up over E5-2637 v3 availability???
So, some unique requirements not really related to hardware, driven more by the new M$ licensing on Windows Server 2012 and SQL 2014, both of which ultimately save an SMB money compared to last gen.
Windows 2012 STD OEM grants a host GUI plus two VM server licenses; $800 / 2 sockets / 2 VMs is kind of how the SKU reads.
To me this is much more versatile than the 2008 STD grant of host plus 1 VM.
You can buy multiple $800 2012 SKUs, one per 2 VMs. In reality this is $800 for two 2012 VMs, $1,600 for four. Once you push it to 6 VMs you may be beyond a 2-CPU workload and start looking at 2012 Datacenter. So all good.
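The stacking math, as a sketch of how I read the SKU. The $800 figure is the ballpark price from above, not a quote, and the helper name is mine:

```python
import math

# Rough sketch of the Windows 2012 STD stacking math as I read the SKU:
# each $800 STD license covers one 2-socket host and grants 2 VMs.
STD_PRICE = 800          # ballpark figure from this post, not a quote
VMS_PER_LICENSE = 2

def win2012_std_cost(vm_count):
    """Cost of stacking STD licenses to cover vm_count VMs on one 2-socket host."""
    licenses = math.ceil(vm_count / VMS_PER_LICENSE)
    return licenses * STD_PRICE
```

So 2 VMs cost $800, 4 VMs cost $1,600, and an odd VM count rounds up to the next license.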
Where it gets real tough to make the correct move is introducing MS SQL 2012/2014.
SQL STD is now licensed per core and socket. A 2-socket motherboard must have "at least" 4 core licenses and cover the number of actual CPU cores. Then the core exposure per VM must be covered within the VM itself.
A 2-core "SQL" pack is $4,000. Bare minimum is 2 x $4,000, because most mid-line servers are dual socket even with only a single CPU installed. This is a big shift in just one generation: last gen SQL was solely socket-based, now it is socket AND core based. Still, depending on requirements, this can work out well.
BUT... it changes the hardware landscape dramatically. Now you shop for the biggest, baddest Xeons available, when previously it was the most cores possible.
Reason: SQL is very well adapted to multicore, especially for a workload like SharePoint, but if the core count is not there it can also push CPU speed. So between spending $8,000 more for another "set" of socket/cores and the fastest Xeon (a +$1,000 option), a person has to try the +$1,000 Xeon and see if it works. Cool.
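The core-license math above, sketched out with the rough figures from this post (a 2-core pack at $4,000 and a 4-core-license minimum per installed CPU). Real channel pricing varies; the function name and numbers are illustration, not a quote:

```python
import math

# Sketch of SQL STD per-core licensing using the rough figures above.
CORE_PACK_PRICE = 4000    # per 2-core pack, ballpark from this post
MIN_CORES_PER_CPU = 4     # minimum core licenses per installed CPU

def sql_std_cost(cores_per_cpu, cpus=1):
    """License cost to cover the physical cores on `cpus` installed CPUs."""
    licensed_cores = max(cores_per_cpu, MIN_CORES_PER_CPU) * cpus
    packs = math.ceil(licensed_cores / 2)
    return packs * CORE_PACK_PRICE
```

This is the trade-off in a nutshell: a second 8-core CPU adds another $16,000 of licenses on top of hardware, while a faster 4-core Xeon adds roughly $1,000 of hardware and zero license cost.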
The Dell R530 does not offer the highest-TDP Xeons. You could figure this is a 2U limitation that will likely be removed eventually, so I had to go with the 3.0 GHz part; I think it will be enough. It has a nice native boost that gets very close to the highest-TDP part but lacks some cache (10 vs. 15).
The v3 Xeon / 13G / 2011-3 socket combo, taken as individual things, offers little more over the previous 2 generations. And yes, there is a fear that in the real world the memory timings of DDR4 could wreak havoc in these systems a la P4. But combined they make for a true break from the past; early benchmarks/evaluations say dive in if you're in the market for new systems. Sometimes it's the little things that sway a choice: OEM Windows 2012 licensing, USB 3, a fresh iDRAC. These will make a difference in 2017. It is just hard to bust from the past when the Nehalem-series R710 racks now kind of stare down at us and say, "We took all your fat fingers could do 4 years ago, and we will still be responding to those 4 years from now."
Bad mofos, those triple/triples. (The workstations didn't hang in there as well; disk controller speeds and problems, mostly.)
So a 13G synopsis....
DDR4: probably little to no actual improvement.
Power efficiency: maybe. I'm only in charge of 20 servers, so no difference to me.
2011-3 socket: priced the same for the new tech vs. the old 2011 socket, no difference "yet". I started to see some discounts hinted at for 12G, but my sales rep seemed not quite ready to give away 12G systems after multiple quotes for comparable 12G/13G systems. I've never been in sales, so I don't really understand the equation at work here.
PERC: embedded, same card, better interface; same FRU as 12G (R520), I think. 11G had a different battery and no flash NVRAM, and was burdened with an expensive BBU (battery backup unit). The newer RAID cards (and embedded ones) have a lithium-labeled battery that is probably mostly capacitors and flash. Kind of the last gen for this "battery" idea: flash/capacitor has already reached consumer grade, it just needs years of validation on the server side. Heck, guys have started to remove batteries from cars; computers can't be far behind.
iDRAC: still Java-laden, still a hassle, but better. Document the iDRAC requirements for all your IT buddies, set them up, walk them through it every week, then call them on the weekend and time them until they are sick of it. NO shortcuts here. They will eventually unplug your iDRAC cables to make room on the switches.
OMSA: better, quicker response in the RAID config.
PERC Ctrl-R BIOS mode: exactly the same as the last 2 generations, which is actually kind of nice.
Lifecycle Controller: haven't played with it yet. Monday.
Chassis: better. 13G 2U (and probably 4U) is transitioning away from 3.5" drives in total. They now provide a 3.5" backward-compatible tray with a 2.5" adapter. So order a 3.5" 600GB 15K SAS drive and you get a 2.5" (11mm high) Savio-series Toshiba 600GB 10K. These go back a couple of generations and block/sector out exactly like the 3.5" Seagates/WDs, so whatevs. They B fast, they B quiet; 2 me, anything labeled Savio means the first and best. I met Savio.
The 3.5" backplane/PERC/expander and chassis accept consumer drives with zero issues or firmware checking, and they perform off the charts. I slipped a Crucial M4 512GB into a sled and it tested at theoretical maximums. Never seen that before!!
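For what it's worth, a "tested at theory maximums" check can be as simple as a timed sequential read. A minimal sketch (the helper is hypothetical; point it at a file bigger than RAM, or the OS page cache will report numbers well past what the drive can actually do):

```python
import time

def seq_read_mbps(path, block_size=1024 * 1024):
    """Time an unbuffered sequential read of `path`; return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = max(time.perf_counter() - start, 1e-9)  # guard tiny files
    return (total / (1024 * 1024)) / elapsed
```

For serious numbers a purpose-built tool like fio is the better instrument, but a quick timer like this is enough to see whether a consumer SSD behind the backplane is anywhere near its rated sequential read.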
I'm only on day 1 of playing with this new server. Going to do a full blank reinstall via an EFI/GPT boot disk on Monday.
The one thing that strikes me after a first day hands-on with the R530 is that Dell's ecosystem has matured and has not become a "bother, brother, yet another" GUI interface to someone who only commissions or repurposes their gear every couple of years. There has been a consistency to it over the last 2 gens despite the 32- to 64-bit transition and the move to virtualization. The "DELL" feature set used to be more obscure and required bootcamp-type training. Now the documentation and community contributions for their features are accessible, without days on the phone, solving "hows" instead of "whys". Dell's "whys" are nearing completion. Good job.
Knowing how to do helps someone; knowing why to do helps everyone.