Home Server Build

Hi, not sure whether this should be in Networking or Hardware / Build…
I’m looking to upgrade my server game… (I’m an amateur with a questionable setup at best).

The current setup is a prebuilt Dell PowerEdge T130 and a Thecus W5810, both running Windows Server 2016, with AD DS, DHCP, and DNS on the Dell… they’re somewhat linked, and the whole thing is based on my interpretation of a set of Server 2012 videos by ‘Eli the Computer Guy’.

The Thecus was bought to be my home server (additional RAM installed, and it’s now on Server 2016), but when I finally got Solidworks PDM ironed out with my reseller, running it there would have been an ‘unsupported’ setup, so I ended up getting the Dell specifically to run PDM in a ‘supported’ config.
As it stands, the Dell is running Solidworks PDM, and the Thecus is essentially wasting electricity :-/
I also have an old Synology NAS and a very old PC running as my pfSense firewall.

I am looking to have a single box (possibly TrueNAS?) that would be able to look after network storage, backups, some home media / plex, maybe the firewall, and possibly Solidworks PDM… although PDM would need a discussion with SW to see how that could be done in a ‘supported’ config (or that may have to remain on the Dell and be linked in somehow). I’m also thinking of reusing part of the current setup as a secondary box to store less frequent backups of the important stuff on the main server.

While I may / may not be able to get through the software side with minimal help, I think the first thing is getting the hardware in place and then trialling things, but I’m not sure where to start on that.

  • The budget is somewhat uncapped – I don’t want to lose a kidney or anything, but I’d rather buy something slightly overpowered than something that barely meets my needs.
  • I live in the UK (£)
  • I think I’ve got the peripherals needed (well, a monitor, keyboard, and mouse). But the network is currently just unmanaged gigabit switches.
  • The build will be going in a bedroom / home office, so quiet running is a requirement.
  • Part of me really wants to reuse my old ThermalTake Armor case – as it’s sitting idle… but not a requirement. (Happy to build something, it wouldn’t need to be a prebuilt / barebones)

Have a look at the AMD EPYC SoC options. These are low core-count alternatives to the corporate server-grade EPYC SKUs, and some come attached to a mainboard. I’m quite partial to the Gigabyte MJ11-EC0, but at almost USD 500 it’s not exactly cheap, especially as that doesn’t include RAM. And I only found it listed at one supplier, in Poland. It would allow you to re-use the ThermalTake case, although the board lacks somewhat in connectivity due to its size.

If you choose this (or a similar) board, I’d suggest starting with 2 RAM sticks; an upgrade is fairly easy if needed. Put the OS on a suitable NVMe drive (1TB is currently the sweet spot for price/capacity, IMO) and allocate part of the NVMe drive for caching (an L2ARC read cache and/or a SLOG write cache – the primary ARC lives in RAM). Theoretically you shouldn’t need to, but keeping several GB of the NVMe drive spare to account for wear levelling will probably increase the lifespan of said drive by a notable margin.
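To make that a bit more concrete, here’s a minimal sketch of what the pool side could look like – the pool name (`tank`), device paths and partition split are all placeholders, adjust to whatever your hardware actually enumerates as:

```python
#!/usr/bin/env python3
"""Rough sketch only: hypothetical pool name and device paths."""
import subprocess

def zpool(*args):
    # Thin wrapper so the calls below read like the zpool CLI they drive.
    subprocess.run(["zpool", *args], check=True)

# Spinning-rust data disks in a single raidz1 vdev (placeholder paths).
zpool("create", "tank", "raidz1",
      "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd")

# NVMe layout: p1 holds the OS, p2 becomes L2ARC (read cache),
# p3 becomes the SLOG (sync-write log). The primary ARC lives in RAM.
zpool("add", "tank", "cache", "/dev/nvme0n1p2")
zpool("add", "tank", "log", "/dev/nvme0n1p3")

# Deliberately leave the last chunk of the NVMe drive unpartitioned so the
# controller has spare area for wear levelling, as mentioned above.
```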

As for the OS itself, it doesn’t really matter which one you choose, as long as it’s not named after a fruit or poorly written and dressed up badly for a desktop environment. :roll_eyes: Remember, even M$ themselves run their flagship Azure cloud services on Linux, as their own OS just can’t handle it :stuck_out_tongue:


Thanks for the suggestion @Dutch_Master. Any reason for the ITX form factor (was that based on the case? Google may have flagged up one of the other Armor variants)?
I was kind of thinking something bigger – I’ve got the original(?) Armour, all 15 kg of it… so it can theoretically take E-ATX. It has 11x 5.25" expansion bays plus 2 other 3.5" bays in the back, so it should have ample room for storage.

The connectivity on the ITX board is perhaps somewhat limited - I’m not 100% certain on the storage config that I want to go with, but 4 + 4 SATA and an M.2 sounds a tiny bit limiting. I could potentially get a PCIe card to take that up a bit…
At the moment, my gaming rig has a stupid amount of storage, with all of my games downloaded and installed. This may stay when I update it after the server is working… or I may want to add a Steam cache server (or similar) to this build and then reduce the storage on the new gaming rig (not a fully thought-through idea) – in other words, I may want to add a few TB of space in the not-too-distant future.

I have so many additional questions…

  1. How much power budget do you want to invest in it?

  2. Will it run 24/7 or on demand?

  3. How will you handle heat?

  4. For Plex, do you plan to transcode on the fly for older clients/devices, or is your media collection playable on everything that might receive the stream? How many concurrent clients would need transcoding to a lower spec? If you have older devices and need to transcode on the fly, do you have enough hardware encoders (a GTX 1050 can do 3x 25 fps 720p x264 streams at the same time, no problem)? Even a low-power GPU will do, because the CPU alone won’t cut it.

  5. Why the firewall? With the state of security these days, you wouldn’t want such an important machine to also be the firewall, because it’s the first thing on the chopping block when the firewall goes down – and it might well go down, given the exponential number of new exploits every day.

  6. What capacity are you looking to have? ZFS snapshots, replication, caching, dedup, etc.? How deep will you dive into insanity?

  7. Spinning rust or SSDs (I’ve just seen this partially answered)? And if spinning rust, why do you hate SAS?

  8. Will you be running VMs on that server? Are you considering PCIe passthrough for any of your planned uses with any of the VMs?

I’d avoid any solution that doesn’t bring at least 40 PCIe lanes out to actual physical slots on the board. Also avoid dual-CPU setups (anything with more than one NUMA node): in E-ATX I have yet to see a single motherboard that gets it right. CPU2 may have its own RAM, but in most implementations its PCIe lanes are useless because they aren’t connected to any peripherals, and once a process spreads across two NUMA nodes it’s a bad time all around, especially on Windows Server. And for any of the more advanced ZFS features you NEED ECC RAM; you’ll thank me later.
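If you want to sanity-check those two points on whatever box you end up testing (Linux only), a quick peek at sysfs does it – this is just a sketch reading standard paths, nothing board-specific is assumed:

```python
#!/usr/bin/env python3
"""Count NUMA nodes and check whether the kernel's EDAC layer (ECC error
reporting) is active. Linux-only; reads standard sysfs paths."""
from pathlib import Path

# One nodeN directory per NUMA node.
numa_nodes = sorted(p.name for p in Path("/sys/devices/system/node").glob("node[0-9]*"))
print(f"NUMA nodes: {len(numa_nodes)} ({', '.join(numa_nodes) or 'none reported'})")

# EDAC memory controllers only show up when an ECC-reporting driver is loaded.
edac = Path("/sys/devices/system/edac/mc")
controllers = sorted(p.name for p in edac.glob("mc[0-9]*")) if edac.exists() else []
if controllers:
    print(f"EDAC memory controllers found ({', '.join(controllers)}) -- ECC reporting is active")
else:
    print("No EDAC controllers found -- ECC is absent, disabled, or the driver isn't loaded")
```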

There are used single-socket server boards in E-ATX, and if you’re feeling crafty, take a look at motherboards pulled from 4U chassis servers, preferably Supermicro – they’re the most hobbyist-friendly server manufacturer.

I don’t really believe you absolutely need to buy brand-new hardware; there are big savings in using older components if your needs aren’t insanely CPU-intensive. I rock, at best, an 8-core Broadwell in most of my workstations and servers and it’s fine. Where I did have big issues was a dual-Haswell E-ATX Asus workstation motherboard, but it’s fine now that I’ve pulled the second CPU out – not as strong, but it does its thing well enough.

What exactly do you mean by that? If it’s the physical size, that’s what Gigabyte chose to make it. If it’s about why I proposed it, it’s about availability for consumers. As said, I only found one (1) supplier here in Europe, none in the US (from Amazon et al.) nor anywhere else. That’s not to say there aren’t any, but apparently availability for con- and pro-sumers is difficult at best. I’m aware there’s one other manufacturer (ASRock Rack) with a similar board, but I haven’t found a supplier for that at all.

Another thing is price. The Gigabyte board is about 500 bucks (US), while on average a “standard” EPYC server board is 700, but that doesn’t come with a CPU included like the SoC board does. Obviously said “standard” EPYC board will have far superior connectivity, RAM capacity, etc., but at a price. It’s also quite a bit less friendly to your energy bill: the SoCs are much more power-efficient than the regular EPYC CPUs. Never mind Intel’s Xeon offerings :roll_eyes:

Using a suitable HBA you can get another 16 or even 24 drives connected to the Gigabyte SoC board, more than you could stuff in the chassis :wink:

@rehabilitated_it_guy - glad you have them, man; hopefully they keep me on the right track :slight_smile:

  1. I’ve got a 1500W UPS and an older 700W(?) UPS running the current servers, NAS, etc. I’d like to think that the 1500W UPS could keep the server online for a while (would love it to be 20 to 30 mins – rough runtime maths after this list).
  2. The box would be on 24/7. If it’s running VMs or something, then they could be more on demand.
  3. Heat may be an issue, it is a bedroom / office, cold in the winter, warm in the summer. I’ve got a fan for myself, but I’d like to use somewhat efficient parts to reduce the overall temperature contribution.
  4. I’m not anticipating having more than 1 or 2 concurrent Plex clients. Currently, I’ve got a GTX 470 and a GTX 980 Ti spare, with a Quadro K4200 being freed up when I upgrade another system. I like the idea of a Pi-based client, but I’ve never messed with one, so yeah…
    If I could throw one of those GPUs into the server and have lightweight clients, that may be the best way to go?
  5. Currently my firewall is an old Athlon 64 X2 system… so it probably should be updated. But I do take your point about keeping it separate.
  6. I’d have to jump back into some Level1 or Craft Computing videos to give an in-depth answer on the gameplan for snapshots, replication, dedup, etc.
    The various personal files from the different computers aren’t massive… a few hundred GB in total. I might look to reuse the 4x 4TB WD Red Pros (RAID 10) that have been sitting in the Thecus (a 5th is sitting spare in its box).
    The Dell / Solidworks PDM is currently taking up about 10GB of the 2x 2TB WD Red Pros (RAID 1) in that server – so, backing that up shouldn’t take up masses of space either.
    Potential Plex files would be about 500GB / 1TB
    The Steam / game side is a different matter, that’s about 6TB or so. While it may be dumb to have that backed up, if it was cached on the server, it would take up noticeable space.
    Part of me just wants to say multiple 10 / 12TB drives… but that part of me may be dumb.
  7. As above, I have some drives already that would be nice to reuse. (Not against SAS, I just haven’t got any experience with it.) Probably spinning rust with an SSD cache for most of it.
  8. Wendell and Jeff have done a lot on VMs and PCIe passthrough, but that is pretty alien to me. I like the idea, I think that’s the best way to go, but I’ll have to experiment with it… so, yeah, hardware capable of it.
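(Re point 1: here’s the rough runtime maths I’m working from – the battery capacity below is a pure guess until I check the label on the UPS, and the load figure is just a placeholder for an efficient build.)

```python
#!/usr/bin/env python3
"""Very rough UPS runtime estimate. The '1500W' on the unit is its output
rating, not stored energy, so the battery figure below is a guess -- check
the actual battery spec (volts x amp-hours) on the label."""

battery_wh = 2 * 12 * 9          # e.g. two 12 V / 9 Ah batteries (guess)
inverter_efficiency = 0.85       # typical-ish, varies by unit and load
usable_fraction = 0.8            # don't plan on draining the battery flat
server_load_w = 150              # placeholder draw for an efficient build

runtime_min = battery_wh * inverter_efficiency * usable_fraction / server_load_w * 60
print(f"Estimated runtime at {server_load_w} W: ~{runtime_min:.0f} minutes")
# ~59 minutes with these numbers; a 300 W load would halve that.
```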

Essentially, I’ve watched a load of videos, like the ideas and think I’ve got use for them. I know I need a better setup than I have (even if it’s just for backups)… and I want to learn how to do it.
I figure Step 1 is getting the hardware to play with while the current setup is running; then, when it’s behaving, Step 2 will be bringing it into play.
I find it easier to learn while doing, and I’m sure I’ll be rewatching videos and annoying the other side of the forums when experimenting… those poor people. :wink:

@Dutch_Master - It was just that seeing an ITX server board threw me a little; I’m more used to desktop hardware and to seeing large EPYC server boards, so a tiny board made me pause (and TT did make a mATX Armour case) – so I kind of misread where you were going there.

As you’ve probably realised, I’ve not got much knowledge with the server side. If an SoC is suitably capable, it’s an interesting way to go - especially with the suggestion of an HBA.

Given that heat may be a problem, I’d agree an SoC might be the best way to go. Power also gets expensive fast, considering worldwide fuel and energy costs are about to skyrocket (Germany/Bavaria electricity should go up 50% from last November to this May 1st; UK, no idea, but probably even worse).

But with all that said, I’d say you’re still in pure desktop-platform territory; a uATX SoC won’t bring much to the table. If someone makes an EPYC SoC board in a full ATX form factor, that would be worth writing home about.

Maybe play with something you have on hand to master the technology and the admin side of things before you invest in a production machine. I’ve regretted not taking that road many times, because I spent way too much money on features I never ended up using, and way too many kWh on something that turned out to be impressive but completely unnecessary. Sometimes it really isn’t necessary to have everything on one machine; for instance, my Plex server runs on a J2900 with a low-end Quadro card (can’t remember which one, an older pre-P4000 model, but it draws about 90W at full blast) and an 8TB SATA drive (it syncs with the media folder on my primary on-demand storage and archival server, which does pull a lot of electricity). It still eats 35W at idle, but I can live with that.

Thinking is saving.

If it were up to me:

  • Do yourself a favor and invest in a proper firewall, something 8-port like the Juniper SRX210. This will help with heating issues (since it draws less power), as well as allow you to partition your network for IoT, Lab, Office, Home, WiFi and DMZ.
  • Migrate active drives to SSD. SSDs are now sufficiently cheap that HDDs only make sense in a cold-storage setting; by every other metric SSDs are superior and the default choice for on-demand storage. Furthermore, M.2 simply takes up less space and you can really shrink build sizes with it. Still waiting for a proper ATX board with 1 PCIe x16 slot and 16 M.2 x4 slots (= 64 lanes), though… :slight_smile:
  • I would invest in a good airflow case like the Meshify C for silent ops; trouble is, that specific case won’t let you house your hard drives properly without some custom mods.

Good luck though!

That sounds familiar… wasn’t there recently a big SNAFU with some backdoor or intentionally stupid bug with them, or was it something else called Juniper? I clearly remember reading at least six or seven urgent security bulletins about something Juniper, but can’t say for sure.

@xXDeltaXx Speaking of intentionally stupid firmware, if you can get a cheapo router that supports an OpenWrt upgrade, that’s another learning journey worth taking. Depending on what your internet connection is, pick the interface accordingly. Usually more rewarding than getting a few ulcers from MikroTik’s out-of-the-box solutions, at least in my opinion (the out-of-the-box part is where the stomach ulcers gestate).

Possibly. I just pointed out a hardware firewall that looked reasonably priced; feel free to pick anything above the consumer space, though. I worked for a bit at Westermo on their routers, but those are pretty much as far away from standard RJ45 as you can get.

Industry grade routers built to last ain’t exactly in the cheaper market segments…

And that makes it most interesting for certain entities to pick apart and/or intentionally backdoor, because of all of the above. I’m not saying you can’t take down a Linux-based convert-o-cheapo (any flavor of Linux) that you’ve configured badly, or where the firewall package or one of its dependencies is bad.

But when you have an expensive commercial router that literally accepts door knocks to open an undocumented port that’s basically an entrance to the whole network, that is just mean in comparison.

Most individuals shouldn’t care whether their router is backdoored or not; the Windows OS fills in the gaps anyway, so what’s the point. I’m just waiting for the revelation of how many Linux distros have been doing the same for years (Canonical, I’m looking at you).

And that’s the problem: an ATX SoC board would be so popular it’d eat (considerable) market share from their regular EPYC sales, which are way more profitable. No idea if you’ve looked up the specs on the Gigabyte SoC board I mentioned and put them against the AMD EPYC 3151 specs, but you’ll notice Gigabyte made a lot of compromises on connectivity and features compared to that chip. That’s entirely on purpose, for the reason I alluded to before.

I hate artificial market segmentation. During the brief time I was in R&D, we finally managed to drive a project home to be the best it could be, and then management and marketing got involved to cut down the features in software so they could have multiple SKUs at different price points. It was always the same board with the same STM32F411 chip and all the same peripheral and expansion chips, but it was artificially cut down into three SKUs.

I think that is the only time I threw up into my mouth in my whole life.

If we hadn’t done that and had kept a single board at a reasonable price, focusing instead on making a more powerful controller based on the experience and know-how from the first one, we would probably have conquered the market blitzkrieg-style, because there was no other product with even remotely similar features that installers really wanted. But management got greedy fast; a Chinese company saw their cut of the cake, made a clone of our product, analyzed the feature set, cracked the control software, made a worse but far cheaper product based on the GD32F403, and the company I worked for went belly-up fast.

I use an EPYC Embedded ITX board from ASRock Rack, which is excellent, though currently out of stock:

In particular, the dual 10GbE Intel NIC is quite useful. Newegg does have a version in stock with half as many cores, which is still great material for a home NAS:

You’re missing the point, which is that a $300-$500 router will have better performance and better security defaults, and will draw less power while taking up less space.

Sure, the DIY box can be better configured if you know what you’re doing. Sure, backdoors exist, and less scrupulous companies and states will always inject stuff. Not even Linux is safe; it’s entirely possible Fedora, Arch or Ubuntu has a built-in backdoor, too. It’s just less likely, but not impossible.

Am I saying you should blindly trust commercial routers? No, but it’s silly not to use something that, for the most part, has a proven security track record – even if these products do sometimes fail.

TL;DR: choosing DIY over industry routers for safety reasons is about as rational as choosing to go by car over flying to avoid dying (1 in 3.3 billion for planes vs. 9 in ten million for cars, per year). There are other reasons to DIY; security ain’t one of them.

@rehabilitated_it_guy @Dutch_Master @wertigon @ban25 (I guess the first thing is I keep forgetting if I need to tag people, or not)

Thanks for the various replies – this isn’t heading towards the massive monolith build I was imagining… glad I asked first.

Eventually, I’ll be building a new gaming rig, then repurposing the current one as an updated workstation, the old workstation then becoming available hardware to play with.
I’d much prefer to drop all of those into a working environment rather than having to migrate them into a new setup when it’s eventually ready. (Plus GPU prices…)
So, as is, I’ve only got an old Q9450 (Socket 775) / Abit (remember them?) IP35 Pro XE build to play with – if it still works… it’s been years since it was powered on.
As such, I think I’ll need to be pretty sure of the intended hardware and then buy in to build / play / master all of this. The good thing is that the current shambles is functional, so I will have some time to learn before trying to roll anything out.

It does make sense to pull the Athlon firewall and replace it with something more efficient. I’m already using pfSense (though I’m sure not to its fullest).
There is a ‘Network Chuck’ video where he was using the https://protectli.com/vault-4-port/ (or similar) - which he notes as overkill. But something like that is interesting to me.
My ISP router is fed with a coaxial cable, and I’m not sure it can be replaced… so I probably don’t need to go overboard with this (currently, I’ve got a home network and a guest one, nothing more fancy – so a 4 port is probably fine).

As previously said, the SoC / embedded route wasn’t on my radar at the beginning of this, so I’m still trying to process the implications compared to a beast of a workstation board with massive connectivity.
Wendell’s had a few positive looks at Icy Dock, so some of their enclosures may be a good way of getting multiple drives into a single machine. But I’m not sure how an HBA (or similar) would interact with them to boost the ITX board’s connectivity.
I really want to stay away from multiple cases if possible, and I don’t have space for a rack.

Currently I only have gigabit networking; yes, it would be nice to have 2.5GbE or more… but at present it would only be this new server, and I could probably direct-attach it to the new gaming rig whenever that gets built.

So the board I linked (both of them, actually) has 1 M.2, 1 PCIe x8, and 1 OCuLink (PCIe x4) for expansion purposes. I’ve configured mine with 6 SATA drives internally, leveraging the OCuLink plus the two onboard SATA ports, alongside an LSI HBA with 8 additional drives in an off-board enclosure. All the drives are mapped to the same ZFS pool (RAIDZ1 with 4 vdevs) on Ubuntu 20.04. The HBA was purchased from Art of Server on eBay:

That gives me 140 TB of NAS (raw, before redundancy) on an 8-core EPYC with dual 10GbE in a pretty small form factor and power envelope.
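For scale, the back-of-the-envelope maths behind that figure – the exact vdev split below is a guess, and real usable space will come in a bit lower once ZFS metadata and padding are accounted for:

```python
#!/usr/bin/env python3
"""Rough capacity check for a multi-vdev raidz1 pool (hypothetical split)."""

disk_tb = 10
vdevs = [4, 4, 3, 3]   # drives per raidz1 vdev -- a guess at how 14 drives split

raw_tb = sum(vdevs) * disk_tb
# Each raidz1 vdev gives up roughly one drive's worth of space to parity.
usable_tb = sum(n - 1 for n in vdevs) * disk_tb

print(f"Raw capacity:            {raw_tb} TB")
print(f"Usable (approx, raidz1): {usable_tb} TB")
# -> 140 TB raw, roughly 100 TB usable before metadata overhead.
```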

I too would like to build a home server. I hope I’m posting this in the proper place; if not, should I start a new topic? Let me know.
I’d like to run a Linux home server – I think a ‘lab’ is what the cool kids call it. It would help me get into the back end of development; currently I support the front end of a very large facility-management application with a bit of XML and JS development. I don’t have room for a server rack, but a nice ATX or E-ATX case sitting next to my workstation would be fine. I want to run different versions of Oracle DB and/or SQL Server, plus Tomcat 8 and/or Tomcat 9, plus maybe Git and JIRA, to create an entire closed environment.

I see this as a two-parter: 1. how to determine the hardware requirements, and 2. choosing the software setup. Should one start with the 2nd question first in order to answer the first? I’m learning about containers vs. KVM VMs. I assume my first consideration should be whether SQL Server, Oracle DBs, Tomcat, Git, and JIRA can be run in containers, or whether to just go with KVM VMs, and then determine how much hardware (cores, RAM, storage) is needed. I’m leaning towards multiple VMs so that if I screw up an installation, I don’t have to redo all the other apps – which I can easily do and have done.
Any recommendations? I’d like all of these to run at the same time:

  • 1-Oracle 12c VM
  • 1-Oracle 19c VM
  • 1-SQL Server 20XX VM
  • 1-Tomcat 8 VM (Running multiple Tomcat instances)
  • 1-Tomcat 9 VM (Running multiple Tomcat instances)
  • 1-GIT VM
  • 1-JIRA VM
  • 1-fun & games VM (Learning new things)
    = ~8 VMs + Host

Math…
2+ cores apiece?
8GB RAM apiece?
< 200GB storage per VM (1TB NVMe for the OS + a separate 2TB NVMe drive for the VMs)

Is this true…
The host needs the same number of cores and RAM as the total allocated to the VMs it’s running?
So,
16 cores for VMs + 16+ cores for host = 32+ cores overall
64GB RAM for VMs + 64+ GB RAM for host = 128+ GB RAM
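Here’s that maths as a throwaway script (the per-VM numbers are just my guesses – the DB VMs would probably want more RAM than the Git box – and how big the host reserve really needs to be is exactly what I’m asking above):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope sizing for the VM list above (all figures are guesses)."""

vms = {
    "oracle-12c": {"cores": 2, "ram_gb": 8, "disk_gb": 200},
    "oracle-19c": {"cores": 2, "ram_gb": 8, "disk_gb": 200},
    "sql-server": {"cores": 2, "ram_gb": 8, "disk_gb": 200},
    "tomcat-8":   {"cores": 2, "ram_gb": 8, "disk_gb": 100},
    "tomcat-9":   {"cores": 2, "ram_gb": 8, "disk_gb": 100},
    "git":        {"cores": 2, "ram_gb": 8, "disk_gb": 100},
    "jira":       {"cores": 2, "ram_gb": 8, "disk_gb": 100},
    "fun-games":  {"cores": 2, "ram_gb": 8, "disk_gb": 200},
}

# Placeholder host/hypervisor reserve -- whether this should instead mirror
# the full VM allocation is the open question above.
host_reserve = {"cores": 4, "ram_gb": 16}

cores = sum(v["cores"] for v in vms.values()) + host_reserve["cores"]
ram = sum(v["ram_gb"] for v in vms.values()) + host_reserve["ram_gb"]
disk = sum(v["disk_gb"] for v in vms.values())

print(f"{len(vms)} VMs -> {cores} cores, {ram} GB RAM, ~{disk} GB of VM storage")
# -> 8 VMs -> 20 cores, 80 GB RAM, ~1200 GB of VM storage (with these guesses)
```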

I’ll add…

  • No overclocking
  • CentOS Stream 8 (coworker recommendation)
  • No mining or gaming
  • Currently running a 10-core Dell Xeon workstation with VMware Workstation on Windows 10 Pro and one CentOS VM that keeps running into a brick wall. Hence the desire for a standalone server.

With all of the options and variations, where does one start?
Any and all advice would be greatly appreciated.

Not even a half-height rack? You could perhaps get something like this, which would be ideal to fit a 4U gaming system plus a few 1U blade servers; beware the noise, though:

If space is a concern, how about going SFF with mostly mini-ITX systems? You could, for instance, order two N-ATX or P-ATX cases from SFFtime; they don’t come cheap, but they sure save space.

Other interesting options are Velkase, J-Hack, Lazer3D, and last but not least, Sliger.

Regardless of form factor, I still recommend a separate firewall; the rest could be in a single server.