I’m new to the world of home lab servers, and I need some guidance. I’m working on building my homelab and need a virtualization server that I also want to be able to use as a NAS. I’ve been eyeing a used Dell R730/R730xd with 2x E5-2699v4 CPUs and 128/256GB DDR4 RAM, but the more I research, the more uncertain I become.
Now, I’m considering alternative options: building a consumer platform around an AM4 CPU, using a 12900K I already have with a Supermicro server board, or exploring used AMD EPYC processors like 2x 32-core 7601s with a Supermicro H11 board and 256GB of RAM, since there are tons of bundles on eBay for those right now. Or a dual Xeon Scalable build for around the same budget, if a server board can be found for it easily enough.
I’ve now changed my mind about the Dell R730/R730xd and switched to an R740 8 LFF with 2x Xeon Gold 6152 and 256GB RAM, the Intel 2x 10Gb SFP+ & 2x 1Gb RJ45 daughter card, and an H730P w/battery; no drives, no caddies, and I’m not sure which included riser card it will come with. My budget is around $1700, not including drives or caddies. I want to make sure I’m making the right choice without overspending or missing out on better value for that amount of money. I’d appreciate any advice or recommendations from those more experienced in this area.
I tend to stay away from enterprise gear, personally. The reason is that most enterprise gear is made to be power efficient under load, and a home lab is idling at least 85% of the time. While an EPYC or Xeon can easily crunch 32-core workloads using only 200-300W, they can draw 100W+ per CPU when not in use.
When talking 24/7 operation, every watt counts. 1W running around the clock is 0.72 kWh per month (30 days) and 8.76 kWh per year. Even if you have a relatively low power rate of ~$0.15 per kWh, that is still about $1.30 per year you are paying for every extra watt.
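If you want to play with the numbers yourself, here is the same arithmetic as a tiny Python sketch (the $0.15/kWh rate is just the example above; plug in your own):

```python
# Yearly electricity cost of an always-on load, at a given price per kWh.
HOURS_PER_YEAR = 24 * 365  # 8760 hours

def yearly_cost(watts: float, price_per_kwh: float = 0.15) -> float:
    kwh_per_year = watts * HOURS_PER_YEAR / 1000  # 1 W -> 8.76 kWh/year
    return kwh_per_year * price_per_kwh

print(yearly_cost(1))    # ~1.31  -> each idle watt costs ~$1.31/year
print(yearly_cost(200))  # ~262.8 -> a 200 W idle server costs ~$263/year
```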
So, the first thing I would do is try to separate what your servers are supposed to do. Ideally, you would have two: one that is always on but low power, running the things that need to be up 24/7, like a NAS file server or a web front end; the other beefy, but only started when required. This kind of setup can save you a lot of money in power bills.
If I were to build this in a modern fashion with a single server, I would probably go with something like this:
Now, this is quite expensive, and I only want you to treat this build as an example; some parts can be found quite a bit cheaper if you care to dig around. I would say it is possible to shave $500-$600 off the parts total.
Unfortunately that would not be enough to cover the cost of a second machine, so going all out is the better option here. With this setup you will have 12 cores and 24 threads, which is plenty powerful and will not pull a ton of electricity (probably averaging 50W-70W). Compared to a dual-socket enterprise setup, that is going to save you at least around $130-$260 a year in power if you pay $0.15 per kWh.
Hope that helped some; feel free to reevaluate and get back to us. If you plan to only run your homelab on the weekends, then that Dell R730 looks good.
Ditto on wertigon’s post. I also like the idea of splitting “homeprod” and homelab into separate machines. But unlike production scenarios, a home environment is completely inverted: you run your lab on a beefy machine (mine’s a TR 1950X) that you only power on on demand, and your “homeprod” on the lowest-power devices you can (in my case, a combination of a ThinkPenguin 4-bay NAS as a hypervisor, a Pine64 RockPro64 NAS, and an ODROID HC4 backup server).
I’m not a big fan of old enterprise gear anymore (because of how much power it uses), but if I were to get one, I’d only use it as an on-demand lab (like my old Threadripper, which is still more than powerful enough).
This is the only requirement you posted, but no workload was specified. To give you a better recommendation, I suggest you give us an example of what you’re planning to run.
For example, if all you want is a Jellyfin server and a NAS for storage, you can just get something like an ODROID H3 with a Type 1 case. If you want to run an entire k8s lab with a virtual Ceph or Longhorn backend, you’d need something like a Ryzen 7700/X (I prefer the non-X for the lower TDP if you’re going to run it in a room that isn’t always air conditioned, but depending on your requirements, you might want the higher boost frequencies).
If you want to run services and not VMs, a low-powered device is the way to go. If you specifically need VMs (especially non-Linux VMs), you’ll want something a bit beefier, but not too power-hungry. Say you want to run Nextcloud, Gitea, Home Assistant, Jellyfin, a file share (SMB or NFS, say) and a homepage (like Homer). Those can all easily run as containers on lower-end SBCs and work decently well without breaking the bank or using too much power.
But if you want to run these as separate VMs, or if you plan to add something like a surveillance ingestion VM or a Minecraft server, you’ll definitely want something beefier.
Those were just examples, but do let us know what you’re planning to run and if you have plans for expansion.
I would recommend having a look at the ASUS ProArt X670E-CREATOR WIFI, as it has much better connectivity (more PCIe lanes) and more functionality out of the box. It does lack IPMI, but that’s a fairly minor drawback in general.
Micron’s ECC modules are quite a bit cheaper.
I would also recommend avoiding Samsung SSDs due to firmware bugs/limitations; the Crucial P5 Plus or T500 are good alternatives, though.
I like the Silverstone rack mount chassis series and have one myself. Only get one, though, if you plan on filling those bays with drives. At that point, the power consumption goes way up.
There are cheaper and arguably better-suited rack chassis if you don’t require hot-swap HDD bays.
Oh yeah, I forgot how much power HDDs draw. I retired my 4x1TB redundant NAS two years ago, after seven years of faithful service, in favor of two 1TB SSD JBODs plus backups, and haven’t looked back since, though I am thinking of upgrading to a mirrored 4TB all-NVMe setup. That’s me though.
Also, food for thought: if the OP or anyone else wants a less janky setup, with the drawback of a less capable CPU, there is also the Asustor Flashstor 6 for $449. It is less flexible, but draws between 13W and 24W with all SSDs installed. With 4TB SSDs going for $200, that is slightly less than $1700 for 6x4TB of raw storage, and the same money will buy you 6x8TB in a year or two. Not a terrible deal if we are looking at the $1500-$2000 range, though yes, you can buy 4x12TB HDDs plus a NAS unit for the same price.
You can use practically anything in your homelab/homeprod. My lab is just some old Ivy Bridge laptops with Proxmox installed on them, a NAS, and some RPis. It also runs my prod stuff.
I worry about proprietary parts like power supplies and connectors, and especially any software licenses you need on an old enterprise server, unless you never need to update firmware; I don’t know if these Dells have that issue.
Depends on what you want to do with it. Personally, I love looking at boards with onboard CPUs. I now have an N100 doing some server things, and an old AMD Athlon APU; a really bad one, but it has 4 cores. The more hard disks you put in, the more power you are going to need: four old spinning-rust disks will use more power than that N100.
Enterprise servers are built to make all the noise they want; as long as they stay cool, everybody’s cool.
But I hear it needs to do multiple things, including virtualization, so you want PCIe lanes. I would separate it: build an always-on, power-efficient NAS, and get some old enterprise hardware to run your virtualization on. When you are done with it you can turn it off, and then the power consumption isn’t that big of a deal. You can also look at the four-processor machines.
General use case is mainly to learn the different skills necessary for a career in cybersecurity, most likely as a SOC Analyst. So the reason for investing in the homelab, besides fun/enjoyment, is to add as much hands-on experience to my resume as quickly as possible by the end of this year.
More specific use case scenarios are:
Experimenting with different virtualization software
Setting up multiple virtual lab environments for things such as Active Directory, and vulnerable systems to practice penetration testing, vulnerability assessments, and studying different exploits
Deploying a honeypot to attract & analyze malicious activity
Testing against firewalls and IDS/IPS software, and incident response simulation
Web application environment for vulnerability testing
Deploying & learning SIEMs like the ELK Stack and Splunk, plus a separate SIEM for my home network (if I don’t just use the other half of my CWWK N305 Proxmox OPNsense router/firewall/WireGuard system)
So mainly it will be used for setting up several different lab environments. I’d want to make sure I have storage for backups of every environment, since it’s almost certain I’m gonna break them multiple times. I’d also like it to serve as a NAS, to save me from having to pay for a separate NAS on top of a server. Then in the future I’d dedicate part of the server to just personal services and whatnot.
So my issue with getting something newer like AM4 (and most likely not AM5, unless I went with that Ryzen 7700 you mentioned) is that I don’t think it would be enough to run more than maybe two decent-sized virtual environments for the use cases I listed in my reply below. Right?
I’d rather have something faster than the 2.1GHz Gold 6152, but I don’t know what to look for that would give me more cores and enough DDR4 DIMM slots, at least without paying thousands, it seems, to get enough cores and access to enough RAM to run five or six different virtual environments plus some security services for my home network. DDR5 is still too expensive; it would end up being too much for me to afford unless I only had like 64GB, which isn’t gonna be enough for more resource-intensive tasks like dabbling in reverse engineering and machine learning.
If the machine runs mostly idle, I would not go for AMD unless it is the 5700G; the other CPUs just use a lot more power.
Something like a 12th-gen Intel 12400 can idle very low depending on the hardware (between 4 and 20W; it really depends a lot on the motherboard).
I have a Proxmox VM server running with the slightly faster 13400 and about 30 Docker containers for all my home services. Max CPU utilization over the last month was 25%, and it usually sits at 2%.
If energy costs are a non-issue for you, then second-hand server hardware would serve you well; you can run multiple computers and spread things out over tons of cores.
It all has to do with what load you put on any specific node. At work we run a legacy bare-metal hypervisor solution with a dual-core Xeon, basically splitting the machine in two. It runs like a charm despite each application getting a single core.
Server software isn’t like games, where you have a minimum requirement. Server software demand scales with the load you put on it, and the baseline requirements are often very modest. A Raspberry Pi 2B can easily handle simple web services and serve something like a WordPress blog or an HTMX-driven site. Does this mean I would put a Raspberry Pi as a frontend computer for Google, YouTube, or Amazon? No, that would melt the poor thing.
For a low-traffic site like a homelab, I’d say a Raspberry Pi is even an advantage, because it lets you easily overload the poor server and thus practice what to do when a server is getting beaten to death.
Even the Ryzen 9 7900 is more capable than you might think.
To clarify, the DDR5 kits I linked were specifically ECC, and yes, DDR5 ECC RAM is still expensive, especially higher-capacity modules. Here is one non-ECC kit that costs $270 for 96 GB, however. At $2.5-$3.0 per GB, that’s not too shabby, although it could be better.
As for how much you need: honestly, for a virtualized Linux environment 2GB is plenty and 4GB is almost overkill for most tasks. You are not going to be running these with a graphical environment in any case.
At the end of the day, sit down and think this through. There is a lot of ground to cover and no need to rush things. Consider what you would like to have as a “production” zone (24/7 services) and a “lab” zone, and try to keep them entirely separate if possible; hacker stuff can bleed over and leave unintentional side effects. Maybe a map of the network, including virtual nodes, could be a start?
This can be done mostly in a VM. I’ve run Proxmox VMs inside virt-manager to test stuff, and I also ran an OpenNebula cluster inside virt-manager. You can do the same with VMware, XCP-ng, and Hyper-V. It won’t be the fastest, but it gets the job done.
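One gotcha with hypervisor-in-a-VM: nested virtualization has to be enabled on the host, or the inner VMs will crawl. A minimal sketch for checking this on a Linux/KVM host (the kvm_intel/kvm_amd module parameter paths are the standard ones; the rest is just illustration):

```python
# Check whether nested virtualization is enabled on a Linux/KVM host.
from pathlib import Path

def nested_enabled() -> bool:
    # Standard module parameter paths for Intel and AMD hosts.
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            # Reads "Y"/"1" when nested virt is on, "N"/"0" when off.
            return param.read_text().strip() in ("Y", "1")
    return False  # neither KVM module is loaded

if __name__ == "__main__":
    print("nested virtualization:", "enabled" if nested_enabled() else "disabled")
```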
Depending on your definition of virtual lab, you might need something really beefy. If you want an entire hypervisor in a VM that will run other VMs and try to pentest / exploit the hypervisor, it’s going to need quite some muscle.
If you only mean 5 VMs in a group for one project (in their own vlan), 4 VMs in another and so on, then you don’t need that much oomph. A Ryzen 5900x should do. 5950x would be better. Or get an older server platform. I assume you’ll only power it when you’re working on it (for the most part).
If you don’t want your bills to skyrocket, you’ll want to do this only on the weekends, or have a dedicated low-powered PC doing it. Might want to put this one aside for now. Maybe, at most, build yourself a router, run an IDS/IPS on it, and use it as your main home router.
Doesn’t change much from point 2 in terms of hardware.
Ditto.
Ok, so you already have something. Use this for point 3 and put everything else on a single beefy hypervisor box.
That’s going to be a tough call. I’d suggest you leverage snapshots and templates instead. Better to automate things a bit, like one script to revert everything to your preferred configuration. You could put a backup location on the same hypervisor and automate restores (maybe leverage Proxmox as your main hypervisor and run a Proxmox Backup Server as a VM), but snapshots are a much easier way to revert state to the point right before you messed up your VMs.
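To illustrate the “one script to revert everything” idea, here’s a rough sketch against Proxmox’s qm CLI; the VM IDs and snapshot name are made up, and it assumes you run it on the Proxmox host itself:

```python
# Roll a group of lab VMs back to a known-good snapshot, then start them.
# Runs on the Proxmox host; VM IDs and snapshot name are hypothetical.
import subprocess

LAB_VMS = [201, 202, 203]    # one lab group's VM IDs (example values)
BASELINE = "clean-baseline"  # snapshot taken right after initial setup

for vmid in LAB_VMS:
    # Revert the VM's disks and config to the snapshot state.
    subprocess.run(["qm", "rollback", str(vmid), BASELINE], check=True)
    # Rollback leaves the VM stopped (for snapshots taken without RAM
    # state), so boot it again for the next run.
    subprocess.run(["qm", "start", str(vmid)], check=True)
```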
It’s weird that I say that, because normally I would highly encourage you to set up a backup server for your whole home, but given that this is an actual homelab, backing up data that can always be recreated is kinda wasteful. You can just set up a small Gitea instance, document the steps to deploy something, and make scripts to redeploy whatever breaks; that will be more efficient than doing backups of the environment.
Maybe if you have some DBs you might want to do some dumps here and there, but unless you plan to mess with ransomware, you can even keep the dump files on the same VM (for easier and faster recovery). Just don’t snapshot a live DB: shut down the DB, take the snapshot, then start the DB back up.
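Concretely, the stop-snapshot-start dance for a DB VM could look something like this sketch (assumes the qemu-guest-agent is running inside the guest; the VM ID and service name are examples):

```python
# Quiesce a database inside a VM before snapshotting it.
# Assumes qemu-guest-agent runs in the guest; ID and service name are examples.
import subprocess

VMID, DB_SERVICE, SNAP = "210", "postgresql", "pre-lab-run"

def in_guest(*cmd: str) -> None:
    # Execute a command inside the guest through the QEMU guest agent.
    subprocess.run(["qm", "guest", "exec", VMID, "--", *cmd], check=True)

in_guest("systemctl", "stop", DB_SERVICE)                   # flush + stop DB
subprocess.run(["qm", "snapshot", VMID, SNAP], check=True)  # consistent snapshot
in_guest("systemctl", "start", DB_SERVICE)                  # bring DB back up
```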
That’s where the homelab becomes homeprod and I can’t recommend in good faith that you use the same box, unless you don’t mind paying tons for power (if you go for the old enterprise gear).
+1
For many labs powered all at once, I’d say an 8 core barely cuts it; a 12 core would probably do. But they don’t need to be really fast cores. We’ve gotten away with 40+ VMs on far lower-end hardware (dual 4-core Xeons), although we weren’t giving them much to work with (2-4GB of RAM and 2 cores each). You might want more juice to keep yourself from getting bored, but in a pinch, even a 2-core VM with 2GB of RAM will do.
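The reason that works is overcommit: idle lab VMs barely touch their vCPUs. Back-of-the-envelope, with representative numbers from that dual 4-core Xeon example (the per-VM sizing is the 2 cores / 2-4GB mentioned above, taking the midpoint for RAM):

```python
# Back-of-the-envelope overcommit check: 40 small VMs on dual 4-core Xeons.
physical_cores = 2 * 4               # two 4-core sockets
vms, vcpus_each, gb_each = 40, 2, 3  # 2 vCPUs and 2-4 GB each (midpoint)

# A 10:1 vCPU ratio is fine when the labs sit mostly idle.
print(f"vCPU overcommit: {vms * vcpus_each / physical_cores:.0f}:1")  # 10:1
print(f"RAM if all run at once: ~{vms * gb_each} GB")                 # ~120 GB
```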
This might be a good platform for an older Threadripper, if you can buy one cheap, like a 2920X, or even better a 24-core 3960X. But don’t let the muscle fool you: all that extra grunt buys you is the ability to power on everything at once, and you don’t need to do that.
Even on my TR 1950X, I don’t have to power everything at once. Even when I do, it’s still mostly idle (lmao, well, I don’t have a lot of VMs, only about 16, of which one is my beefy Windows VM, one is a Proxmox VM, and 8 are my OpenNebula lab with Ceph; I can power everything and it’ll still take them all). If you power one group at a time, you can get away with way less, like a 6 or 8 core and only 64GB of RAM (although 96 wouldn’t be too bad).
Indeed. Just don’t do it at the same time when you’re working on another lab. Besides, you’ll want to focus on one thing at a time, so having a less powerful system (saving some money and power) might be a good way to learn how to juggle resources and force you to keep your focus on one thing.
Both the dual E5-2699v4 and the dual Xeon Gold 6152 with 256GB of RAM you mentioned would be plenty to power everything at once, at the cost of electricity. But that means some of your resources will be wasted processing virtual hardware that isn’t being utilized (when you’re working on a different group of VMs).
However, there’s one thing that jumps to me.
If you already have it and it’s sitting doing nothing, then use this. Save a buck. Even if you don’t use the 8 efficiency cores, the 8 perf cores should be enough to power one or two labs at a time.
If it’s already being used for something (like your main rig), then it’s probably best to go with something like a 5000- or 7000-series Ryzen (if you can get 96 or 128GB of RAM), or with an older Threadripper if you can find one cheap.
Now it’s up to you to choose what you want - power everything up at once, or power things in groups.
Agree 100% with this; I should have read more carefully. If you already have a spare 12900K, a W680 board is indeed a solid investment for a homelab, and I also think you can use DDR4 memory with that.
+1’ing here. IME, core count, RAM, and SSDs rather than HDDs are the biggest factors for most of what you’ve outlined (I can’t speak to the machine learning side). That experience comes from a past life with a lab on an X79 system running 4 major versions of SharePoint and all of the adjacent infrastructure on a single spindle. Swap thrash is hell, even on SSDs; turn things off and on in groups to minimize it. And agreed on either automating the environment stand-up, or at minimum setting it up once, shutting it down, then snapshotting or templating it before you do any testing. Labs should be disposable environments: roll them back or stand them up fresh when you’re done, to prep for the next test.
Hardware-wise, use what you have (the 12900K) or go as cheap as possible (X99 / LGA 2011-3 workstations are cheap and probably take the DDR4 ECC RAM that’s starting to get offloaded; check the manual before buying), but for your use case I wouldn’t commingle homelab and homeprod at all. The lab will be power-thirsty; homeprod shouldn’t be.
A common perception is that the CPU makes up a significant portion of idle power consumption. In reality, it’s the chipset and peripherals (IO, PCIe lanes and cards, IPMI, storage, networking, etc.). I offer the following YouTube video for more details.
As a result, the choice of AMD vs. Intel (at least when looking at the most recent couple of generations) should be largely irrelevant.
Used enterprise gear typically offers significantly more IO and other capabilities than consumer-ware and, as a result, should be expected to draw significant power at idle (yes, I am aware of exceptions, but I’m making the point to drive awareness).
Partly because of the chiplet design, AMD CPUs use more power at idle. That is why I mentioned the 5700G: it’s a monolithic chip, and the power consumption numbers show it uses less. The second reason is that the chipset is not as efficient.
I don’t have that many tests with an optimized motherboard, but from the ones I’ve seen, Intel has come out on top for idle power. Of course, if electricity cost is not a concern, I would definitely go for an AMD system.
Agreed. But it’s the choice and quality of motherboard components that enable this. Hence me going for an Intel N100 system as my always-on choice.
My 5900X system on an ASUS Pro WS X570 mobo draws over 60W idle; with multiple storage devices, an Nvidia RTX 3080, and two 4K monitors attached, the system draws over 200W while doing little to nothing.
The same CPU on an ASUS TUF Gaming B550M-Plus mobo draws 25W idle. The difference here is the bigger chipset and more workstation-like features on the first board.
Putting a 5700G into the TUF Gaming mobo didn’t change the idle power consumption (but added an iGPU that removes the need for a power-hungry discrete GPU).
So, no contest to your point; just adding data points.