Home server


Howdy fellow techies

Long story short, I've finally been able to get on the housing ladder and can't wait to fill the place with new toys!

My plan for the computing side is to do away with my trusty old desktop and run a single high-end server with multiple "nodes". My thinking is that it will work out cheaper to have one server running VMs, but then again I may be wrong; it depends on requirements, right?

The ideal system would need around 4 VMs:

  • CCTV NVR VM - perhaps something like Kerberos (alternative recommendations welcome). I'd plan on regularly checking the footage, with a record-on-motion feature to minimise storage requirements. Not sure about cameras; maybe a pair of Hikvision 1080p IP jobbies. It would be headless, with remote web-interface management.

  • Gaming VM - I've been out of the gaming scene since I moved back home to save up, so I've got a backlog of older, less graphically intensive games to catch up on; I hope I can recycle the 7870 from my current rig for the moment. I'd love to tinker with GPU passthrough to a Win10 VM, but I'm new to the game, so I'm not sure about the prerequisites.

  • Lounge/Media VM - nothing fancy. I've got a reasonable media library and would love Crunchyroll integrated. I haven't got a smart TV, so I'm not looking at a NAS media library; maybe a Linux distro with, say, Kodi. Literally media and light web browsing.

  • Spare - future use for routing (pfSense), NAS, etc.

I love the idea of VDI, but I think it's out of budget and could introduce lag to the gaming experience. All the rooms are within 4 m of where the server would be placed, so HDMI and USB could theoretically be routed? Perhaps a budget SBC used as a thin client is an option?

Any thoughts on possible distros, hardware and software would be greatly appreciated, even if it's only a pointer to other threads.

Managed to pick up 2 CT16G4RFS4293 DIMMs on the cheap to start.

Looking forward to the knowledge of the tech gurus



I don't think you'd really save any money with that setup. Your server otherwise probably wouldn't require a GPU, so you'd just be sticking it into the server for the h3ll of it. And then you'd have to route either multiple cables or Thunderbolt.

As far as Kodi goes: personally, I didn't really like it, mostly because of the interface.

There is also a Crunchyroll plugin for Plex apparently, though I don't use Crunchyroll…

You can go with Kodi if you prefer it, ofc.

What does SketchUp do that it needs to be on a separate VM on a server? Is part of it actually server software, or are you just stuffing it into yet another desktop VM?



Yeah, Plex could be an option; I only mentioned Kodi since I've played with it a little on my phone, using UPnP. Do you know if it offers user accounts and watch-tracking? I watch stupid amounts of shows and lose track haha.

Yeah, scratch the spare VM for the garage, as it would most likely be pointless since I can't be in two places at once haha. At most it would be 3 VMs as it currently stands… I would like to add a router VM such as pfSense at a later date so I could tinker with DNS ad-blocking, caching, etc.

I think I've misled you with the gaming-rig term, perhaps? I would use the server itself to game on, via a Win10 VM + GPU passthrough. Unless you are suggesting a different solution to a dedicated GPU?



It can track what you have watched. Additional users are a paid feature though. (You can, however, share the library with another email address; you just cannot make users like you would on Netflix.)

I'm not sure, however, if the Crunchyroll plugin also supports tracking what you have watched.

No, I'm just questioning whether you'll save actual money with that, or if you wouldn't maybe be better off sticking the GPU into your trusty desktop instead. Or do you want your desktop to be your server?



Yeah, the server would be replacing my desktop. It's an FX-4170, 32 GB DDR3 and a 7870, so it's getting on a little… the server would live in its own room so the summers don't create a sauna when gaming.

I'd route an HDMI cable and a couple of USB ports to the rooms, so literally just keyboard, mouse and monitor.



This here looked pretty interesting. Worth checking out I think.

Ryzen 3000 should technically launch today.

There may be an X570 board around too, not sure. This is the only 'server board' I've seen so far for Ryzen. Having an onboard GPU alone makes it kinda neat: you should be able to do passthrough while still having a GPU available for the rest of the system.

Now, it's not really 'high end' as far as servers are concerned. But I'd think up to 12 cores / 24 threads (16 cores at some point) and 64 GB of RAM is plenty 'high end' for a home server.



Now that's a find! It even supports registered DIMMs. Mine are 2933s, but that shouldn't cause an issue. I have also been keeping an eye on the new mobos, but have yet to find one that supports ECC RDIMMs; it looks like only Threadripper and Ryzen Pro support ECC? PCIe 4 would be nice too, seeing as the 5700 and XT are looking pretty tasty…



Not to burst your bubble, but these are incredibly hard to find now. ASRock only made a few thousand and, as far as I know, they didn't make any more after the initial run. I do know they are making a refresh with 10GbE, if that matters.

You can find them on eBay occasionally, but they sell fairly quickly.



That sucks, but maybe that gives them an incentive to build more.

Do board partners get any money from Intel/AMD to build server boards for their server-class CPUs as opposed to consumer ones?

Otherwise I don't get why this doesn't happen more frequently. There are surely plenty of small businesses and consumers who would be more than happy with X470/X570.



Well, if the upcoming Epyc lineup is anything like today's, I think my CPU is decided… perhaps anything from the 7252 up to a 7282.

From what I've found and what you've said, I'm not going to get a Threadripper or Epyc mobo that supports RDIMMs; I'm limited to server mobos… but I guess deep down I already knew that.



There are always supply-vs-demand factors: some small businesses might opt for a higher-end consumer board for a server, while others just go for a "pre-fab" server from the likes of Dell or HP. AMD lost a good chunk of entry workstation/server board options while their CPUs weren't competitive; ASRock more than likely did one production run to test the waters with Ryzen and may re-attempt with the 3000 series. I just wish ASRock made a DeskMini AMD model with dual or quad NICs… slap in two SSDs and it would be a spiffy mini-server :grin:

From a work point of view, I wouldn't build my own servers, mostly because of OEMs who don't like to maintain their BIOS/EFI; I've also met a few motherboards (AMD and Intel) over the years which didn't play well with certain RAID cards.



That's not really a big problem (for me, anyway). Software RAID is fine (partly because you are not dependent on a piece of purpose-built equipment), and if you're going to use ZFS the RAID-card discussion goes out the window as well.

The PC guy at my parents' workplace told me they use HP servers partly for their RAID cards, because they are reliable and don't eat CPU performance. I really don't get the CPU-performance part, since I run mdadm on my home 'server' with a 4790K and it doesn't even break a sweat saturating my 1 Gbit LAN. As for reliability, I'd say it's pretty reliable to have one less thing to worry about. But maybe you have more persuasive stats to show about that.
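To put rough numbers on the "software RAID can't keep up" claim, here's a back-of-envelope sketch in Python. The throughput figures are my own assumptions (roughly 110 MB/s usable on gigabit after protocol overhead, roughly 150 MB/s sequential per spinning drive), not measurements:

```python
# Back-of-envelope: can software RAID saturate gigabit Ethernet?
GBE_USABLE_MBPS = 110   # assumption: 1 Gbit/s minus TCP/SMB overhead
HDD_SEQ_MBPS = 150      # assumption: one 7200 rpm drive, sequential reads

def raid10_read_mbps(drives, per_drive=HDD_SEQ_MBPS):
    """RAID 10 can read from every drive at once, so sequential read
    throughput scales roughly with the total drive count."""
    return drives * per_drive

print(raid10_read_mbps(4))                     # 600 MB/s from four drives
print(raid10_read_mbps(4) > GBE_USABLE_MBPS)   # the LAN is the bottleneck
```

Even a single drive beats the network here, which is why the array never struggles: the 1 Gbit link caps things long before mdadm does.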



I don't know how HP servers handle software RAID vs hardware; it's possible they require hardware cards for certain specs. Ever since they started requiring "service packs" for BIOS/EFI updates, many have avoided them. I do know that with Dell PowerEdge servers, under heavy database load or speed-sensitive tasks, your mileage can vary: on 4-drive servers the speed loss isn't big, but on models with 8 drive bays it can matter, and a card with cache helps.



Fair enough. But then I look at my parents' workplace. They run RAID 1, with another RAID 1 external WD thingy for backups. That's effectively fewer drives than I run at home, and they have a RAID card "because it's the only way".

Though the part about "I'm not gonna build my own server for work" makes total sense to me, even while I'm very motivated to do it at home; I don't wanna build many of them. Plus, I built my home server mostly due to circumstances, having had a 4790K in my desktop before. A business most likely isn't going to do that.



Orly? Are we talking mobos or barebone/prebuilt servers? I've never had an issue with Gigabyte or Asus, and I'm not really looking at doing hardware RAID.



I was talking about motherboard makers. Some boards can have weird issues with simple SATA cards, and switching to another SATA chipset maker is usually the workaround; there is a reason why ASUS and Gigabyte typically use ASMedia for the extra SATA ports on their higher-end gaming boards. Keep in mind I'm talking about cases where someone needs more SATA ports to run extra drives or use a hot-swap bay.



I don't think your experience would be that great with VMs and remoting. For example, for the media PC, instead of a small PC remoting into the server, you'd most likely be better off playing media directly on the small PC. You could just use a Raspberry Pi 3 (or, if you're patient, the better Pi 4 in the 2 or 4 GB version). You could also get away with a cheap build, or even a second-hand one from old components; 1080p and maybe even 4K media playback shouldn't be a problem for an ASRock J3455M (this is the motherboard I put in my pfSense router). You could even get away with a second-hand old dual-core Celeron or Athlon II build with a cheap GPU, like a GT 1030 or a Radeon 6670 (I've got both; they're both good for 1080p, and the GT 1030 handles 4K YouTube playback).

I've got a Hikvision 1080p camera at work, and it's set to offload footage to a storage server and delete everything after 30 days. I'm not sure how much storage it uses, but you shouldn't worry too much about it if you enable motion detection (i.e. the camera will only record when it detects movement). If I don't forget, I'll look into the actual numbers; roughly, you should expect to multiply by however many cameras you have and divide by 2 or even 3 (there's a lot of back and forth in our camera's field of view, so it probably records half the day).
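For a ballpark figure, you can run the maths yourself. This sketch assumes a ~4 Mbit/s H.264 1080p stream and motion for about half the day (the "divide by 2" guess above); both numbers are assumptions to plug your own values into:

```python
# Rough storage estimate for motion-triggered CCTV recording.
def cctv_storage_gb(cameras, bitrate_mbps=4.0, duty_cycle=0.5, days=30):
    """Gigabytes consumed by `cameras` streams at `bitrate_mbps`,
    recording for `duty_cycle` of each day over `days` of retention."""
    recorded_seconds = days * 24 * 3600 * duty_cycle
    total_bits = cameras * bitrate_mbps * 1e6 * recorded_seconds
    return total_bits / 8 / 1e9   # bits -> bytes -> GB

print(round(cctv_storage_gb(2), 1))   # two 1080p cameras, 30-day retention
```

So a pair of 1080p cameras lands somewhere around 1.3 TB per month at those settings: well within a single surveillance-grade drive.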

Also keep in mind that with 1 server, you've got 1 point of failure. Too many things running simultaneously on the server and everybody starts lagging.

GPU passthrough is more for people who want to run Linux as their main OS and Windows as a VM, but that's for a main build. I tried playing games over both RDP and VNC on the same LAN and the performance was terrible. Maybe it was something on my end, but I don't think so. I somehow did a little better when VPN-ing home from 2 other locations and trying to game over RDP; it wasn't smooth, but it was enough for what I needed to do. Again, I suggest you do a main build.

I read guides about pfSense, and basically everybody said to just use physical hardware for it, since the speed of virtualized NICs was terrible. I'm not sure how up-to-date that info is, or whether it applies to home routers, but I still have a 15 W PC (the ASRock mentioned above) running 1 WAN port and 2 LAN ports (I've got 6x 1 Gbps ports, because I'm dumb and didn't plan things out from the beginning, and also didn't have a lot of network knowledge; not that I do now, but at least it's a little better).

VMs are fun, but I don't think 1 expensive server plus many thin clients will be cheaper than many cheap PCs. Unless you want to run only passively cooled PCs or pico form factors (Raspberry Pi, Intel NUC, Gigabyte Brix, ASRock Beebox, MintBox and so on) and have all the loud compute done in a server away from your ears, having a server, or even multiple servers in a rack that you remote into, is not that big a deal (and definitely not cheaper than buying more cheap PCs).

Since you already bought ECC memory, I guess you will be going forward with the project. For a home hypervisor, I recommend you either go with Proxmox or XCP-ng (basically the full version of XenServer), or install Fedora and use virt-manager. The first 2 options will make maintenance easier, because you do most of the work through web interfaces. With Fedora, you either SSH into it and run virsh commands, or VNC into it and use the virt-manager GUI. If you want to do PCIe passthrough in the future (like a GPU to a Windows VM, or a 4-port PCIe network card to a pfSense VM), I'd recommend Fedora. You can do it in Proxmox, but I don't think there are as many guides for it as there are for Fedora. So pick your poison.

For the motherboards, I'm very ignorant. I'd suggest you get your hands on a second-hand server, like an HP Gen8, but since you bought RAM, I guess you want to DIY. For a build, I recommend you go with Threadripper 2. I think a 12-core part (2920X) should suffice, unless you want more than 1 performant VM, in which case you should go with a 16-core part (2950X). I can think of this scenario:

  • Main PC: 8 threads, 8 GB of RAM
  • Main Windows VM: 8 threads, 8 GB of RAM (you said you don’t have newer titles)
  • pfSense VM: 2 threads, 4 GB of RAM (2 is probably enough, but maybe having more than 1 PC, 1 tablet and 1 phone would use more than 2? Let's pretend it's future-proofing.
    My pfSense uses around 1 GB of RAM serving those devices.)
  • HTVM (Home Theater Virtual Machine? Is this even a thing?): 4 threads, 4 GB of RAM
  • Storage VM (I'm only familiar with NFS and Samba; since you most likely want access from a Windows VM, go with Samba): 2 threads, 4 GB of RAM. You could also use this storage to keep your video-surveillance recordings.

Around 1 thread and 2 GB of RAM would go to the hypervisor, and you're still left with a few resources for 1 or 2 smaller VMs. You could also give some VMs more threads but a lower maximum CPU usage (for example, give 4 threads to the HTVM but limit its CPU usage to 30%). Depending on what you're doing, you might not even need to allocate so many resources to some VMs.
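A quick tally of the plan above against a 2950X (16 cores / 32 threads) with 64 GB of RAM (the 64 GB figure is my assumption based on the two 16 GB DIMMs you bought, times two):

```python
# (threads, RAM in GB) per VM, taken from the list above.
plan = {
    "main_pc":    (8, 8),
    "windows_vm": (8, 8),
    "pfsense":    (2, 4),
    "htvm":       (4, 4),
    "storage":    (2, 4),
    "hypervisor": (1, 2),
}

threads = sum(t for t, _ in plan.values())
ram_gb = sum(r for _, r in plan.values())
print(threads, ram_gb)            # 25 threads, 30 GB committed
print(32 - threads, 64 - ram_gb)  # 7 threads, 34 GB of headroom
```

So even fully loaded, the plan leaves headroom for a couple of small extra VMs.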

I skipped the motherboard for the TR 2920X; I believe the ASRock X399 Taichi would be a good choice. (Don't forget to use the Level1Techs promo link if you're shopping on Amazon.)


Your build is really one of a kind, TBCH. I personally wouldn't build one server to make VMs for every room that I spend time in. Then again, I'm a low-power freak, and although I love Ryzen and Threadripper, I personally don't need them.



Yeah… it's the equivalent of me buying a flashy car. I just want to build something cool, play around with it, and hopefully learn a few things on the way!

I've enjoyed watching Wendell and Linus etc. playing around with VMs, Linux and passthrough, so I'd just like to start that tinkering journey myself. Just a long-term dream kind of thing.

Unfortunately it looks like the Taichi is unbuffered memory only :frowning: I appreciate the Fedora recommendation though; I'm still not 100% on which distro to go for, so that might help narrow the choice.

I assume the poor NIC performance is overcome by passing through a physical NIC instead of virtualizing one?

I feel like I've shot myself in the foot getting buffered DIMMs, but he only wanted £50 a stick; I couldn't help myself :sweat_smile:

As for the point of failure, I totally agree. Then again, if my desktop went down in the past I'd have been computerless anyway, since I've never had the money for more than one haha.

Perhaps I'll set up my old desktop as a backup in the cluster :smile:



That’s what I’m also guessing (which is why I mentioned it in my previous comment).

Ugh, that's unfortunate. As I said, I'm ignorant when it comes to motherboards. Wendell did a lot of X399 reviews; try looking through a few of his videos.

If you want to start playing with virtualization and servers, I suggest you do just 1 PC: run a Linux distro as the host and a Windows VM as a guest, with GPU passthrough. I'm running Manjaro and just today managed to start Windows (unfortunately my GPU driver crashes; not with error 43, but with "nvlddmkm stopped responding and recovered from a crash", except it doesn't recover and the screen stays black, so I'll have to cleanly reinstall the GPU driver). Then do a storage server on an old PC with software RAID 1 or RAID 10. I also recommend a physical pfSense box, along with buying a gigabit switch, to learn some stuff.
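Before buying anything for passthrough, it's worth checking whether a given board exposes sane IOMMU groups: that's the basic VFIO prerequisite. A minimal sketch (Linux only; the sysfs path is standard, the rest is just directory walking):

```python
# Check whether the kernel exposes IOMMU groups -- the prerequisite
# for VFIO GPU passthrough. Absent directory means IOMMU is off
# (or this isn't Linux).
from pathlib import Path

def iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map each IOMMU group number to the PCI addresses it contains."""
    base = Path(root)
    if not base.is_dir():
        return {}
    return {g.name: sorted(d.name for d in (g / "devices").iterdir())
            for g in base.iterdir()}

groups = iommu_groups()
if groups:
    for gid in sorted(groups, key=int):
        print(f"group {gid}: {', '.join(groups[gid])}")
else:
    print("No IOMMU groups found: enable IOMMU in the BIOS and add "
          "amd_iommu=on or intel_iommu=on to the kernel command line")
```

Ideally the GPU (and its HDMI audio function) sits in its own group; if it shares a group with other devices, you'd have to pass the whole group through.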

Actually, I have a better idea. If you've got 5 SATA ports, use one for a small SSD and 4 for a RAID 10; if you've only got 3, go with RAID 1. Put Fedora on the SSD, add a 2-port PCIe network card and pass it through to a pfSense VM, which should also be installed on the SSD (make the image qcow2, not raw, so you can do snapshots). Mount the RAID and configure Samba on it.
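For reference, the usable space for those two layouts, assuming equal-size drives (the 4 TB figure below is just a placeholder to swap for whatever you buy):

```python
# Usable capacity for the mirror layouts suggested above.
def usable_tb(drives, drive_tb, level):
    """Usable space in TB for an array of `drives` equal-size drives."""
    if level == "raid1":
        return drive_tb                 # every drive mirrors the same data
    if level == "raid10":
        return drives // 2 * drive_tb   # striped mirrors: half the raw space
    raise ValueError(f"unknown level: {level}")

print(usable_tb(2, 4.0, "raid1"))    # 3-port case: mirrored pair -> 4.0 TB
print(usable_tb(4, 4.0, "raid10"))   # 5-port case: four drives  -> 8.0 TB
```

Either way you pay half the raw capacity for redundancy, but RAID 10 gives you the extra read speed from striping on top.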

Your primary network cable should go to the pfSense WAN, and the pfSense LAN should go to a managed switch (20 ports should be enough; make sure you get one with a few PoE ports to power the cameras, as PoE injectors are janky). From the switch, run one cable to the integrated LAN port on your old PC (for Fedora), one cable to the new PC you will build (Fedora host, Windows VM), 2 cables to 2 WiFi access points, 1 cable to an HTPC, and the rest of the cables to your surveillance cameras.

This should be a fun project. As mentioned before, the cameras can offload footage to the Samba server. I also recommend a hefty UPS, or 2 smaller ones: 1 for the old Samba/pfSense PC and 1 for the switch (and obviously the cameras), since you want to keep recording video even when somebody cuts the power to your home.

Well, instead of 1 big, hefty server, you will have 1 new main PC, your old rig acting as storage server and router, 1 HTPC and a new switch. For now, I'd say that's enough to learn, have fun and also have something to brag about on your CV.