First home lab advice

Hi All,

I am building my first home lab/server and would greatly appreciate the advice/insights of this community. I have been researching this for a while but keep finding conflicting information regarding parts and am feeling a bit lost.

I have a preferred budget of around 5000 USD. Obviously it would be preferable to reduce costs, but I am not sure how feasible that is given my requirements. As you will see below, my currently specified build is significantly over this. If the outlay is genuinely cost effective then I can proceed with it, but I'd really like some advice on whether I have made any mistakes or errors in the pricing and part selection. I live in Australia, so some parts are harder to obtain online or second hand.

Here is an overall summary of my currently specified build:

Build and requirements summary:

I currently have a router from my ISP with a home broadband internet plan. I can connect to this router wirelessly throughout my home from e.g. my M1 MacBook Pro (which I do most of my coding and work from), my iPhone, etc. I want to deploy infrastructure in two rooms: a work room and a server room. Each room has a single ethernet socket in the wall that connects to the router.

In the server room I want to connect the ethernet port to a network switch. I will then store two main servers in this room (at first…) along with a UPS. The first server is a Computation Server and the second server is a Storage Server.

The Computation Server is for running a range of projects, e.g. machine learning modelling, data science projects, some gaming, etc. These will be run under a Proxmox hypervisor with various virtual machines. Most projects will be in Linux, but gaming, for example, will require a Windows virtual machine.

The Storage Server will run ZFS with one zpool consisting of multiple RAIDZ2 vdevs, each containing 5x 18TB HDDs for around 50TB of usable storage per vdev. To upgrade the storage I will create a new vdev each time and add five more 18TB HDDs, boosting the capacity in increments of ~50TB. To accommodate these expansions I want a RAID card/HBA that supports ~25 drives (either directly or via a SAS expander). A case with enough hot-swappable HDD bays would be ideal.
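As a sanity check on those capacity numbers, here is a rough sketch (my own helper function, ignoring ZFS metadata, slop space, and compression):

```python
# Rough usable-capacity check for the proposed layout:
# each RAIDZ2 vdev = 5 x 18 TB drives, with 2 drives' worth of parity.
def raidz_usable_tb(drives: int, parity: int, size_tb: float) -> float:
    """Approximate usable capacity of one RAIDZ vdev, before
    ZFS overhead (metadata, slop space, TB-vs-TiB accounting)."""
    return (drives - parity) * size_tb

per_vdev = raidz_usable_tb(5, 2, 18)
print(per_vdev)                       # 54.0 TB raw-usable per vdev

# In TiB (what most tools actually report): 54e12 bytes / 2**40
print(round(54e12 / 2**40, 1))        # ~49.1 TiB, so "around 50TB" checks out
```

So each 5-wide RAIDZ2 vdev spends 40% of its raw capacity on parity, which is worth keeping in mind when comparing against wider vdevs.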

I want a simple monitor and keyboard in the server room to debug and set up the servers. This can use a basic KVM switch so I can do this for all machines, even those without IPMI support. In the work room I want a mini computer connected to a keyboard and dual monitors, giving me another access point to my servers. I plan to connect this directly to the ethernet port in that room and use remote desktop software to interact with my virtual machines over the network. I would like this link to be low latency and high resolution to enable some gaming as well as general interaction with the VMs. I mostly work and code from my MacBook and would like to primarily connect to the servers via wifi from there. The Computation Server will have access to the Storage Server over the network via the network switch, using ZFS datasets shared over SMB or NFS. My server motherboards should support IPMI for remote setup/control.

I would like to be able to remotely access my home lab from other locations, e.g. whilst travelling. This would be through a VPN (using a mini computer with OpenVPN) with multi-factor authentication (e.g. Google Authenticator) for security. I would like separate VLANs for my home network web usage, my critical home server hardware, etc. for added security. I would also like to expose specific services on my server for public/easy access via my domain name and Dynamic DNS.

Considering these requirements and my available storage space, I would like to build this in a rack system. A balance between performance, future upgradeability, cost, and power consumption (to a lesser extent, see below) is important.

Regarding power consumption and associated costs: I have a significant solar array at home and a Tesla Powerwall. I expect this to significantly reduce electricity costs, as I am currently not using much of the generated power. Here are the key specs of the power system:

The peak power output of my solar array is 6kW. The yearly total generated is roughly 11.5MWh which gives an average of 31kWh per day. I have a Tesla Powerwall battery with a capacity of 13.5kWh. The yearly usage is about 4MWh which is 11 kWh per day. This is currently 99% provided by the panels and battery with 1% coming from the grid. I have no other renewable energy sources.

According to my calculations, the main failure point is a sustained draw above 900W: the problem is at night, since the Powerwall can't last a full winter night at that power draw. So I treat 900W as the maximum allowed continuous power consumption. If I do require more than this, I can activate systems to adaptively control the resource consumption of the VMs and balance the load. This can change through the day and year.
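The 900W ceiling can be sanity-checked with simple arithmetic (a back-of-envelope sketch using my numbers, assuming the Powerwall starts the night full and only the lab is drawing from it):

```python
# Back-of-envelope check of the 900 W continuous ceiling.
# Assumes the Powerwall starts the night full and the lab is
# the only load on it overnight.
battery_kwh = 13.5      # Tesla Powerwall capacity
draw_kw = 0.9           # 900 W continuous draw

hours_on_battery = battery_kwh / draw_kw
print(round(hours_on_battery, 1))   # ~15 h, just enough for a long winter night

# Daily energy at that draw, versus average generation:
daily_server_kwh = draw_kw * 24
print(round(daily_server_kwh, 1))   # ~21.6 kWh/day
# Average generation is ~31 kWh/day with ~11 kWh/day of existing usage,
# leaving ~20 kWh/day of surplus, so 900 W sits right at the margin.
```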

Summary:

As you can see, this proposed build is significantly more expensive than my preferred budget. I would be very grateful for any feedback/advice. Do you see major problems with these component choices, or do you suggest alternatives? Can you see ways to get similar performance/capabilities at a significantly reduced cost (it is quite terrible currently!)? Or is this pricing about right for this level of capability?

Thanks for your time and insights!


Look into Pi-KVM or TinyPilot, then you can use a cheap laptop from anywhere in the house to debug.

Proxmox natively uses the SPICE protocol for console access to VMs, so experiment with that (using remmina or virt-viewer), and with native RDP/VNC. For games there is also Steam Play, which I've used with decent results in the past.

Consider whether Tailscale will work for you. The forced NAT is annoying, but besides that it has many handy features. WireGuard (which Tailscale uses) can often be faster than OpenVPN.

Tailscale can also expose services from your home network to the internet via a remote machine, and it can even handle Let's Encrypt certs for you (Tailscale Funnel). Very handy, though there's a lot of lock-in and vertical integration.

I'm trying to decide if there is a better way to handle your storage. ZFS RAIDZ does have expansion now, but you lose a bit of space efficiency when you grow an existing vdev, though far less than you lose to parity with 5-drive RAIDZ2 vdevs.

If you don't need performance, maybe consider whether btrfs "RAID1" will work for you. It's better described as two-copy: it stores two copies of each extent across multiple drives, so you just add new drives as you need them. Performance-wise, however, you only get the reads/writes of one drive.
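The usable space under btrfs two-copy with mixed drive sizes can be estimated with a simple rule of thumb (my own simplified helper, ignoring metadata overhead, along the lines of what the btrfs space calculator does):

```python
def btrfs_raid1_usable(drives_tb):
    """Approximate usable space under btrfs RAID1 (two-copy).
    Every extent is stored twice, on two different drives, so usable
    space is half the raw total -- unless one drive is so large that
    the others can't hold all the second copies, in which case the
    smaller drives become the limit.  Simplified; ignores metadata."""
    total = sum(drives_tb)
    largest = max(drives_tb)
    return min(total / 2, total - largest)

print(btrfs_raid1_usable([18.0, 18.0, 18.0]))  # 27.0 TB from 54 TB raw
print(btrfs_raid1_usable([18.0, 4.0]))         # 4.0 TB -- the small drive limits it
```

That flexibility with mismatched drives is a genuine advantage over fixed-width RAIDZ vdevs, at the cost of single-drive performance.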

MergerFS + SnapRAID is another choice. It also limits you to the performance of one drive in most cases.

That rack seems a little pricey; if you have a friend with a trailer, you could try buying a second-hand rack. Depending on the location, they can be as low as $50 if you find the right local listings.

You can get a 4-post 25U rack for $200-300 new, which would be a lot cheaper for the loss of just 2U of space. Or buy one used like cowphrase said for even more savings. I just upgraded my rack to a 42U and it was only $325 new for one rated to 1300 lbs.

I would suggest buying some used servers on eBay to save a ton of money. They won't perform as well as the newest stuff, but you get a lot for the money you spend and it is a better place to start.

Thanks to you both for your great advice, this is very helpful!

This does sound better, I will go with this advice.

I will try this!

I have not considered Tailscale, I will research this further thanks!

I have not considered BTRFS or MergerFS so will look into these also. Are you familiar with the pros and cons compared to my ZFS approach?

Yeah, I agree they are too expensive! The problem is that the super cheap ones are 42U, which is too tall for my room. A lot of the others don't have the 1000 mm depth either, which I want for freedom of future upgrades. I have been looking second hand, but no such luck yet…

I would like to follow this path for easy upgrades in future, but for now I like the general idea of my build. Will consider this further though thanks.

16 GB of RAM is totally insufficient, at least for the Computation Server. I would do additional research into this. IIRC the Windows VM alone will require at least 8 GB.

I am getting 16 GB per stick. My original plan was 8 x 16 GB = 128 GB of RAM. I hope to expand this in future as RAM hopefully gets cheaper.

Oh I see now, didn’t notice the quantity column.