
Server Build

Looking to put together a server. (Dell?)

The system will be Ubuntu, running virtual machines. The biggest resource hog is MySQL, and I am trying to configure the sweet spot for CPU and drive speed.

Is there information I can be pointed to that would help me understand the differences and performance pros/cons of Intel vs. EPYC?

We’re not looking for bleeding edge, but rather the current goldilocks performance vs price point as it stands today.

128GB RAM, 2TB RAID 1 for now (expand in future), 1Gb network. Currently I am conflicted as to whether MySQL prefers GHz or cores, and would love this very technical community's opinions in that arena.

Currently we’re using AWS with 96GB and 128 cores, and the only major spikes we see are 900 IOPS and network. (Network will resolve due to the virtual hosting.) And the IOPS should resolve with an NVMe SSD solution.

Can the group here offer suggestions as to what server chassis would fit the above needs, and opinions on EPYC vs. Intel?

Thank you so much.

What’s your budget?

I’ve always thought that high IPC and/or high GHz is preferred for database servers (especially when you have to license per core)

I think that throwing more cores at something doesn’t always help, depending on the workload, unless you have database software that parallelizes the queries well. (But more cores should help when more clients are connecting.)

Really, IMO, the thing to do is come up with something to benchmark your workload against, so you can compare your options before committing to a large purchase.
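For a MySQL workload, one common way to do that kind of comparison is sysbench. A rough sketch of what it might look like — the hostname, credentials, and table sizes below are all placeholders, not recommendations:

```shell
# Hypothetical sysbench OLTP benchmark; host/user/password are placeholders.
# Seed the test tables once:
sysbench oltp_read_write \
  --mysql-host=db.example.test --mysql-user=bench --mysql-password=secret \
  --mysql-db=sbtest --tables=10 --table-size=1000000 prepare

# Run the same workload at several thread counts to see whether throughput
# scales with core count or plateaus early (i.e. clock/IPC-bound):
for t in 1 4 16 64; do
  sysbench oltp_read_write \
    --mysql-host=db.example.test --mysql-user=bench --mysql-password=secret \
    --mysql-db=sbtest --tables=10 --table-size=1000000 \
    --threads="$t" --time=60 run
done
```

If transactions-per-second keeps climbing as threads go up, more cores will help; if it flattens at low thread counts, spend the budget on clock speed and IPC instead.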

HPE DL325 or DL385, both very attractively priced right now. The 1U DL325 would do fine for a database… if the server is going to do some other things maybe down the road, the 2U 385 offers more room to grow. My Dell enterprise rep tells me Dell’s working on more AMD options but that’ll be Q2/Q3 2020, so HPE for the win right now.

I agree with @nx2l that a database server should have the highest IPC your budget will allow. I’m not super skilled with MySQL, but under Microsoft SQL I’d choose clock over cores, BUT my first priority would be RAM. Sure, NVMe disks are fast, but they can’t match the speed of RAM. Flash arrays are still rather expensive, and RAM is crazy cheap.
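On the MySQL side, the main knob for actually using that RAM is the InnoDB buffer pool. A sketch of a my.cnf fragment — the 96G figure is an assumption for a 128GB box mostly dedicated to the database, not a tested recommendation:

```ini
# Hypothetical my.cnf fragment -- sizes are assumptions, tune for your box.
[mysqld]
# Cache most of the working set in RAM; a common rule of thumb is
# roughly 70-80% of physical memory on a dedicated database host.
innodb_buffer_pool_size = 96G
```

Once the working set fits in the buffer pool, most reads never touch the disks at all, which is why RAM-first is good advice here.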

Cores seem almost a freebie nowadays; every CPU has so many cores!

Are you looking for DIY solution or enterprise? What’s the budget for the project? Is this going to be physical or virtual host?

SQL likes clock over cores. And license costs per core are not cheap with SQL servers.
Many people turn off HT to save on licensing and improve per-core performance on SQL Server.

We’d like to stay in the 8k-12k range all in. With some flex for upgrades down the road.

We expect the database to get much larger over time, so expanding storage space will be important.

And the idea of lots of RAM being faster than the drives sounds like a good idea.

That’ll build you a nice server. AMD seems to be pushing heavy discounts on EPYC right now, great time to act. So, are you building or buying?

This works at home, but it’s real hard to do in the Enterprise, unless by upgrade you just mean replace.

Buy as much ram and whatever CPU(s) you see yourself using 3 years from now. I know it’s harder to sell that to the bosses, but taking the server down to swap ram sticks or replace the CPU just kills productivity and that costs money too… not to mention that stuff usually gets more expensive after initial release, and will stay that way for 6-8 years until the model goes out of mainstream support.


Good news is I am the boss. So I have to sell myself.
Here is what I am struggling with:

NVMe drives seem to be the fast stuff. Much want.
Lots of RAM. No problem. Easy to do.

Hard to configure a high-clock-speed CPU on the Dell/HP sites. Either that or I am blind.

Yes, they don’t make it easy. AMD partly to blame as well with nonsensical numbers for the products.

Check out

You can see the speeds of the various Rome products and know what number you want.

Do you have an HPE partner you work with? They can usually help with the CTO (config to order) process.

Oh check out this configurator website, it’s fun to play with

First I will have to apologize for talking out of my ***.

On to the software side, I’m not sure what your reasoning is for running Ubuntu as a host OS instead of something like Proxmox (which I highly recommend, it uses Debian as a base) or a cloud hypervisor on top of Ubuntu, like OpenNebula, but I won’t really question your choices, just pointing out some options.

Now, for the system. I’m not really into server hardware, but what I can tell from AMD’s own webpage is that all EPYC Rome models run up to 3.2 to 3.4 GHz (and that is boosting), so there’s not much of a noticeable difference in clock speeds. Intel might be better if you can find fewer cores and higher clock speeds, but it seems they don’t have much of an offering either; the fastest CPUs I can find are some 10-core, 20-thread, 3.7 GHz (turbo) Xeons, so I can’t say for sure. Architecturally, AMD seems to hold the advantage with Rome, and pricing should be better.

NVMe SSDs can help with IOPS, but you will be giving away hot-swap (well, I think there are some concept / early-adopter NVMe hot-swap solutions out there, but let’s be serious). Just pointing it out… IDK, it looks like you won’t be using more than 10 drives, so I can’t recommend insane configurations like striped RAID-Z3 with SATA SSDs, but even with RAID 10 across 4x SATA SSDs you will have a lot of IOPS compared to the 900 you currently have — probably in excess of 30k.
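Rather than trusting spec-sheet IOPS, the array can be measured directly with fio once it’s built. A sketch — the file path, size, and runtime are placeholders, and it should target a test file on the mounted array, not a raw disk holding data you care about:

```shell
# Hypothetical fio 4k random-read test; path and sizes are placeholders.
fio --name=randread --filename=/mnt/testvol/fio.test \
    --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 \
    --size=1G --runtime=30 --time_based --group_reporting
```

Swapping `--rw=randread` for `--rw=randrw` gives a mixed workload that is usually closer to what a database actually does.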

Then, a few questions come, like, how many vms are you planning to run? What vm specs? Will you be running them on the same ssds that you are running mysql or do you plan to have separate location for them? Will there be other drives inside this new server or will you run the vms from a NAS / SAN? There are a lot of variables.

Another thing that I don’t know if you checked is Amazon’s documentation about vms running on ssds:

“The following instances support instance store volumes that use solid state drives (SSD) to deliver high random I/O performance: C3, G2, I2, M3, R3, and X1”

“The following instances offer non-volatile memory express (NVMe) SSD instance store volumes: C5d, I3, I3en, F1, M5ad, M5d, p3dn.24xlarge , R5ad, R5d, and z1d”

Are you even using all those cores in AWS? Try running a VM with fewer cores on an SSD instance and check performance.

Long-term pricing might not be very beneficial, and you would still be bottlenecked by the Internet connection between you and Amazon. But if you are not filling that MySQL DB strictly from your intranet, Amazon’s internet connection might work in your favor (I think you have a 10G pipe between VMs and at least 1G to the internet, but I don’t remember).

If you are buying a server from Dell EMC or HPE, then you don’t need to study the chassis…

Excellent information here @Biky. Thank you.

Your info about the IOPS: NVMe is just so much faster, and hot-swap is not really necessary.

I will be using different drives for the non-critical VMs. The one I really need speed on will be the MySQL server. All the others will be some sort of Apache2 website or file server. (Not a bottleneck.)

I think, based on your information, the 4x SATA option seems to be more than enough. And the EPYC route should get me some GHz for the MySQL demands, with enough cores.

I am thinking a 7371 might be the ticket.

As to OS — Ubuntu, as that is what I am most familiar with. Care to mention more about Proxmox or OpenNebula?

Now, to just find a system that has the config I would like to build.


OpenNebula is a software stack (like OpenStack) aimed at making a cloud infrastructure. It has some interesting features, like VM template history and more, but unless you want to make a datacenter that you want to give access to more than just sysadmins, I don’t recommend it. We had some problems with some bugs (not sure if they are fixed or worked around in newer versions) so we migrated to Proxmox.

Proxmox is a hypervisor more like VMware or XCP-ng. If only sysadmins will administer the servers, it is easier to work with than a cloud stack, and adding new hosts to the cluster is as easy as installing it, configuring the date and network, going to the web interface, and adding a generated code from an existing node. It’s also easy to configure high-availability groups, live-migrate VMs, and even live-migrate VM disks or convert them on the fly. Proxmox is neat; I could go on and on about it and how it never stood in our way.
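For what it’s worth, the CLI equivalent of that join flow really is about that short. A sketch — the cluster name and IP are placeholders:

```shell
# On the first (existing) node -- create the cluster:
pvecm create mycluster

# On each new node -- join using the IP of an existing cluster member
# (prompts for that node's root credentials):
pvecm add 192.168.1.10

# Verify quorum and membership from any node:
pvecm status
```

That plus shared or replicated storage is all the plumbing needed before HA groups and live migration become available in the web UI.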

As for the server you buy: if you go with a RAID controller, make sure it supports TRIM, and if you go with ZFS, make sure the controller runs in HBA mode. If you buy a PCI-E x16 to 4x M.2 expansion card, I think ZFS or software RAID will be the only option (but I could be wrong about that last part). You could also call the HPE or Dell EMC sales departments, explain your needs, and ask for some configurations for your specific workload. It may be a little painful at first on the phone, but after the sales department sends the details to the configurators (the people who know what they are doing — I once had the opportunity to get hired by HPE in this department), they should come back with some specific models, and then the sales department will send you the config and price. It may ease your work in choosing a little.
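If the controller is in HBA mode and you go the ZFS route, the 4x SATA SSD RAID-10-style layout mentioned earlier is a pool of two striped mirrors. A sketch with placeholder device names (in practice you’d use `/dev/disk/by-id` paths so the pool survives device renumbering):

```shell
# Hypothetical ZFS striped-mirror pool (RAID-10 equivalent) across 4 SSDs.
# Device names are placeholders; prefer /dev/disk/by-id paths in production.
zpool create -o ashift=12 tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd

# Let ZFS pass TRIM through to the SSDs automatically:
zpool set autotrim=on tank
```

This gives the IOPS of two mirrors striped together, with TRIM handled by ZFS itself rather than a RAID controller.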

Thanks @Biky!

I am actually speaking with some of the configurators.

So far, I’ve been quoted an Intel system with a Xeon Gold 6234 (dual CPU) vs. an EPYC system with a 7261 (single).

Curiously, the pricing is only about $2k apart, while the CPU price at Google prices is about $4k different (the 7261 at $700, the 6234 Gold Xeon at $2,405 each). That’s not counting the 12 sticks of RAM on the Intel build vs. the 8 sticks on the AMD build.

So — suffice it to say, it feels like I am being sold a bill of goods to write someone’s bonus check.

Please tell me if my math or price comparison is off, as at the moment, bad taste in mouth = TRUE.


OEMs may either have better deals on some of the CPUs than what you can find online, or they may have invested more to engineer one of the servers; there are a few variables for why one option doesn’t match up on cost.

The Xeon Gold seems to support a higher memory frequency (2933 MHz) vs. 2666 MHz on EPYC, though. AMD has 8-channel memory support, an improved NUMA architecture, and PCI-E 4.0. But the Xeon Gold 6234 has a 3.3 GHz base clock and 4 GHz boost clock, vs. the 2.9 GHz boost on AMD.

The battle seems to be between AMD’s 1x 8 cores / 16 threads vs. Intel’s 2x 8 cores / 16 threads with higher frequency. Does the Intel platform come with more or faster RAM? What about AMD? Which one is cheaper — the AMD one?

If Intel has more RAM capacity (say 8x 16GB on AMD vs 12x 16GB on Intel), then for only $2k, going Intel might be the better deal, even if you lose on PCI-E 4.0 and better NUMA Architecture.

EPYC is the sexy new toy; people are paying a premium for it. Also volume, as the previous poster indicated: they move a lot more Xeons than EPYCs, which means better prices from the manufacturer.

Higher CPU clock speed doesn’t automatically equal faster per-core performance either; IPC matters too.

So far, the DL385 is looking pretty good with an EPYC 7302, 128GB RAM, and SSDs.

Dell does not seem to have access to the 7302s, and is pitching the 7351.

Based on benches, that’s quite a drop-off in performance. Am I misreading this?

@gordonthree, so far your call on the DL385 is looking very good. Dell does not seem to be able to provide a full range of EPYC support.


Sounds like a nice config. Funny that Dell isn’t able to source the 7302, I guess they just don’t care about the entry level.