Building an alternative to Dell file server/Windows DC

Hello PC building gurus!

I wanted your thoughts on building a machine for this role and the pros and cons of a custom-built server over something from Dell. To me, it seems we can get a similarly capable machine for much, much less. Basically, the only thing it is going to do is store CAD files for my buddy’s fabrication shop and be the Windows Domain Controller for about 10 users. It will also be backing up data offsite at regular intervals throughout the day and making local backups at night. I would like a RAID 5 setup (I think). I understand how RAID works, but I don’t know ANYTHING about purchasing RAID cards other than from messing with Dell servers that come with them. Some input on this part is my main concern, as well as whether you think it is worth it to even go this route. He just wants to keep costs low, and I know his guys would appreciate some nice equipment (the place they are leaving is full of ancient technology).

This is where I’m at so far

https://pcpartpicker.com/list/s328xG

To add, I don’t know what you guys think of Supermicro but I suppose that’s an option as well. I’ve never purchased from them though.

The main thing, above all else, when you buy a brand-new server from a company like Dell is the warranty/support plan you buy with it. You might save some money building your own, but if a component breaks and needs an RMA, how long can you afford to be without it? If you buy the right kind of support from Dell you might have an engineer* onsite the same day with a replacement part.

Basically, work out what your recovery time objective (RTO) is worth: if you can run the business without the server for a week or two, building your own might be worth it. Being without a DC could be a total PITA though.

*or highly trained moron with a screwdriver if you are unlucky.

EDIT - looking at the components you have picked, that is cheap workstation territory, not server territory, no wonder it looks cheaper :smiley:

EDIT2 - SPAG


I agree.

If this is going into business use, get something with an all-encompassing warranty.


We use Supermicro in our datacenter (Xeon v3s) and from memory we have about 80 chassis at the moment. They’re excellent. Prices are fair, hardware is reliable. We’ve only had one major hardware failure in the just shy of three years we’ve had the systems, and they shipped an entirely new unit to the DC overnight.


What would be the difference between this type of hardware and what’s in a Dell tower? Besides looking archaic?

Their existing “server” is an HP workstation with a Core 2 Duo lol.

Specced a PowerEdge tower server from Dell, and that for sure rules out solid-state storage. They want like a dollar a gigabyte.

Trying to look at Supermicro but their website makes me wanna throw up.

The difference will be beneath the surface, as it were, with parts that should have been specced and tested far harder than average desktop components. The more expensive PowerEdge servers will also support stuff like ECC RAM, RAID adaptors, and probably remote management, hot-pluggable drives and power supplies, etc.

The best bit would be that you are not on the hook to fix it when it breaks. You could present the two options with pros and cons, but remember to agree a fair rate for your time - even if it is for a friend, or else you’ll grow to resent all the ad-hoc support you’ll end up giving them :slight_smile:

I think I’ve found a build for a reasonable cost from Dell, albeit with traditional drives. It’s been a while since I’ve ordered a system from them and I was thinking it would be more expensive. Any qualms with the S130 RAID controller? When I select the PERC add-in card it doesn’t allow me to check out. I guess it’s not compatible.

Could be, I’m no longer all that familiar with server hardware these days. I can confirm that Dell’s website can be annoyingly quirky though (at least the UK one was last year) and I had to call them to get a laptop configured correctly.

Hopefully someone will come along who can help with particular hardware components.

If it’s just for single-part CAD files under 1GB and you’re not running critical infrastructure off it, a workstation is fine. But don’t use SSDs. There’s no point unless you’re going to invest in a 10Gbit network (CAT6 or better), and endurance is also an issue.

Let’s save some money first:

1) Ditch the 2700X. You don’t need the cores for a fileserver, domain controller, or even a small webpage. You need about 4 cores (1 for fs/share, 2 for Windows Server, 1 for other stuff/host OS). Take a look at the 2400G: 4 cores/8 threads and built-in graphics. Don’t try to build headless on a consumer board; they are missing too many features that make that reasonably acceptable on server gear.

2A) Swap the 3 SSDs for 5 HDDs of 3TB each (15TB raw). Use ZFS RAIDZ2 (9TB usable) on Linux. With so few hard drives there’s no reason to invest in a RAID card - you don’t want to buy RAID cards cheap, and onboard SATA will be fine for this purpose. Run Windows Server in a VM. You can set the Windows VM to start automatically with a one-line crontab job.

2B) Add in two more small SSDs (~120GB) for the host OS and VM OS. Don’t try to run them off the ZFS pool. If you want to go nuts, get three and mirror the host OS. (You can back up the VMs to the ZFS pool.) You do not need a RAID card for this; do everything in software. This way you yourself can clone the smaller SSDs into a new system at a moment’s notice and debug the primary server if something happens (it probably won’t for 5 years).

3) Cut the RAM in half. This particular system doesn’t need it, and you’ll need that money elsewhere. For ZFS you want roughly 1GB of RAM per 1TB of storage. Ryzen supports ECC, though not officially. Frankly, bit corruption on DDR4 is so rare it just doesn’t matter much for non-critical (i.e., not life-support) systems.

4) Get a bigger power supply. Look for an 850W Platinum. You won’t pay that much more, and the headroom will be nice.

5) Buy all the optional fans. Server fans can run at high RPM forever; consumer fans can’t. You’ll want slow and easy for 5-8 years, so install ALL the optional fans, balancing intake and exhaust. Do not water cool anything. The supplied cooler for Ryzen should probably be OK, though it hasn’t been tested enough to know for sure. You should be fine, and fans are easy to replace if you get a fan error or overheating.
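Item 2A above might look something like the following sketch. It assumes a Linux host with ZFS and libvirt already installed; the disk names (/dev/sd[b-f]), pool name (tank), and VM name (win-dc) are all placeholders, not anything from the original post - substitute your own.

```shell
# Create the RAIDZ2 pool from item 2A (5 x 3TB -> ~9TB usable).
# ashift=12 aligns to 4K sectors on modern drives.
sudo zpool create -o ashift=12 tank raidz2 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# A dataset for the CAD share, with cheap inline compression:
sudo zfs create -o compression=lz4 tank/cad

# Start the Windows Server VM at boot. libvirt can do it natively:
sudo virsh autostart win-dc

# ...or use the one-line crontab job mentioned above instead:
# @reboot virsh start win-dc
```

Double-check the device names with `lsblk` before running `zpool create` - it will happily eat the wrong disks.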

Yeah. Now that’s done, let’s talk about redundant architecture. The failure rate is going to be a little higher in this system than in a server. You have options: 1) build two (overkill), 2) buy the Dell server (overpriced), 3) build redundancy into the LAN (hive mind, baby).

Server architecture is often built on thin-client thinking: all the other computers are crap and going to die, the server must live! Well, that’s limited thinking. You can build failover protection into the network by beefing up your friend’s workstation. Buy a couple of high-capacity drives (two 10TB drives should be fine), throw them in your friend’s workstation as a RAID 1, and sync the ZFS pool to it. There are a couple of ways to do that, but I bet you’ll add another cronjob on the server plus some fiddling with your friend’s computer (minor, mostly safe fiddling, though). Back up the OS mirror, VM drive, and ZFS pool over the LAN (you could launch a Linux VM and rsync it, I think). That gives you three-point OS and VM redundancy (only two locations, though - remote is still needed, especially in a shop) and two-point share redundancy. It should cost less than the build you have specced out.
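The "sync the ZFS pool to the workstation" cronjob could be sketched like this. Everything here is an assumption for illustration: the workstation is reachable as `shop-ws` over SSH, it has its own pool called `backup` on the RAID 1 pair, and the dataset name `tank/cad` matches whatever you created on the server.

```shell
# Nightly replication of the CAD dataset to the workstation.
# First run needs a full send: zfs send tank/cad@base | ssh shop-ws zfs recv backup/cad
NOW=$(date +%Y%m%d)
zfs snapshot "tank/cad@$NOW"

# Incremental send from the previous snapshot (tracked in a marker file):
PREV=$(cat /var/lib/last-cad-snap)
zfs send -i "tank/cad@$PREV" "tank/cad@$NOW" | ssh shop-ws zfs recv backup/cad
echo "$NOW" > /var/lib/last-cad-snap
```

The simpler (but slower) alternative is the plain rsync mentioned above: `rsync -a --delete /tank/cad/ shop-ws:/mnt/backup/cad/` - no snapshot bookkeeping, at the cost of walking the whole tree each run.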


I agree with vbrandon, especially on running the Windows Server AD controller in some sort of VM. You can back it up very easily and also move it to another system much more easily than if it were a bare-metal install.

For the case, why not go with the newer Define R6 rather than the XL R2? They are around the same price at retail.

I agree with vbrandon as well, but if your systems and operations are crucial, mirror your redundant server(s), and backups (scheduled for off hours) should be performed daily.
A server farm can never be too big, especially if you are doing any cluster computing!

Also, if your system uses hot-swap trays, consider keeping a backed-up redundant copy of the OS drive.
Should your OS drive fail, you can be back up and running in a few minutes by swapping out the drive.
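One way to keep that spare OS drive current is to clone the live OS disk to the standby tray on a schedule. This is a sketch under assumed device names: the OS lives on /dev/sda and the spare sits in the tray as /dev/sdg - verify with `lsblk` first, because `dd` to the wrong disk is destructive. Note that cloning a mounted disk is only crash-consistent, which is usually acceptable for a get-back-up-fast spare.

```shell
# Block-level clone of the OS disk to the hot-swap spare:
sudo dd if=/dev/sda of=/dev/sdg bs=4M conv=fsync status=progress

# Scheduled for the off hours suggested above (03:00 nightly):
# 0 3 * * * dd if=/dev/sda of=/dev/sdg bs=4M conv=fsync
```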