Home Lab - newbie looking for advice!

Hello! I’m building my first server/home-lab

2TB RAM
200TB storage - can be slow
Fastest 32-core (64-thread) CPU we can get (willing to give up some GHz depending on price)

I’ve only had experience building gaming PCs before, so I’ve been watching YouTube videos and reading this forum to try to learn exactly what all the components of a server are.

Also, it seems multiple CPUs are necessary to have 2TB of RAM? If so, how do the multiple CPUs work together (do 4x 8-core CPUs at 3GHz = 1x 32-core CPU at 3GHz?)

Any advice or recommendations greatly appreciated as it’s a lot to wrap my head around! <3

Do you want a rackmount chassis? Or tower?

If you can find one with 10-12 hard drive slots (some full towers have this, and quite a few rackmounts) then you can buy refurbished 20TB Exos drives and hit your storage requirement. Use a 9305-24i HBA to connect all the drives in the tower. Or buy a used drive shelf, or build your own - this is a rack chassis that has nothing in it but hard drive slots. It connects through an expander; you can get a used one on eBay for $40 or less, and use a 9300-8e HBA instead to connect to the drives.

The RAM requirement is going to be the biggest hurdle. You have to go with a Threadripper Pro, Xeon, or Epyc CPU for that. You will want to buy 256GB RAM sticks most likely, and these cost a ton of money. A single one of these CPUs can support 2TB RAM on the newer generations.
Looks like most 256GB sticks cost around $3000 each:


If you want the fastest clock speed that supports 2TB RAM, you will likely be looking at a Threadripper Pro 32-core. It is also overclockable, if you care about that, for even more MHz. The Pro 32-core has a 4GHz base clock, which is a lot more than most server CPUs, and a stock boost clock of up to 5.3GHz under light core usage.
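At the ~$3000-per-256GB-stick price above, the RAM bill dominates everything else. A quick back-of-envelope (prices are the rough figure quoted above, not an actual quote):

```python
# Back-of-envelope RAM cost using the ~$3000-per-256GB-stick figure above.
TARGET_GB = 2048        # 2TB target
STICK_GB = 256
PRICE_PER_STICK = 3000  # USD, rough street price mentioned above

sticks = TARGET_GB // STICK_GB
print(sticks)                    # 8 DIMMs
print(sticks * PRICE_PER_STICK)  # ~$24,000 in RAM alone
```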

This would be the board I would go with for one of those CPUs:

Heya thanks for the reply Enigma

edit - I’d prefer rackmount (I think it makes it easier to expand later if needed?)

Ok great, I found the 20TB Exos drives but wasn’t sure how you connect them, so thank you for helping me understand.

I’ve been looking at some Dell PowerEdge models that seem to cover the specs, but I’m not really certain what parts I need to buy with them to make it all work. Do you have any advice on those, or on whether they’re even a reasonable option for my project?

There are additional latency and bandwidth limitations you hit once you go between sockets, which is what you are asking about; some software cares a lot about that, other software not so much. I would look into NUMA (which stands for non-uniform memory access) and socket interconnects (on Intel it was QPI, and is now UPI). For example, if memory is directly connected to CPU 1, accessing it from CPU 2 will have higher latency and will have to traverse the socket-to-socket links between the CPUs (which may constrain bandwidth).
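A toy model of why this matters. The latency numbers below are made-up round figures just to show the shape of the problem, not measurements from any real system:

```python
# Illustrative only: latencies are invented round numbers, not measurements.
LOCAL_NS = 90    # access to RAM attached to the same socket
REMOTE_NS = 140  # access that has to cross the socket interconnect (QPI/UPI)

def avg_latency_ns(remote_fraction):
    """Average memory latency when some fraction of accesses hit the far socket."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

print(avg_latency_ns(0.0))  # everything NUMA-local
print(avg_latency_ns(0.5))  # NUMA-unaware workload touching both sockets evenly
```

Whether your software lands near 0.0 or 0.5 on that dial is exactly the "some software cares a lot, others not so much" point.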

I don’t really have much advice for you in terms of hardware, as it is way out of my price range. I could maybe give advice on 2TB of DDR4 RAM with an older CPU, but since you care a lot about CPU speed as well, you would have to look into a newer platform like Enigma suggested above (newer Xeon/TR Pro/Epyc).

Of all your requirements, 200TB of storage is the easiest, and I would look into it last, as any system that meets your other needs should easily be able to accommodate one HBA card and maybe a few SAS expanders to hook up a bunch of hard drives.


Sounds like a lot of money for a first server - what kind of software are you planning on running? Proxmox and some VMs inside?

Re how to connect a gazillion SATA drives - basically you buy a controller and some fancy cables. A popular controller is the LSI 9201-8i or similar, which has two SFF-8087 connectors, each capable of going via breakout cables to 4 drives (4 lanes, times 2 = 8 drives connectable internally, hence the 8i in the name).

Then, there are “expander boards” which let you connect a gazillion SATA drives to a single card - basically you get a 36-port expander board, which is a 36-lane card, use 4 lanes as input and 32 lanes to hook up 32 drives. So for about 250 bucks for 1 controller and 2 expanders you’ve got space to hook up 64 SATA drives.

You can even daisy chain expander boards.

The problem with lots of drives is the power supply: drives spinning up and doing lots of seeks use a lot of power, which is where disk shelves come in. But for 200T of space, that’s basically 10-12 drives, so you don’t need to worry about disk shelves.
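Rough numbers on the spin-up problem. The per-drive wattages here are ballpark assumptions (large 7200 RPM drives pull roughly 2A on the 12V rail at spin-up); check the datasheet for your actual drives:

```python
# Rough spin-up power estimate; per-drive figures are ballpark assumptions.
SPINUP_W = 24  # ~2A x 12V during spin-up
ACTIVE_W = 9   # typical seek/read power once spinning

def worst_case_watts(n_drives):
    # All drives spinning up simultaneously (no staggered spin-up)
    return n_drives * SPINUP_W

print(worst_case_watts(12))  # 288 W just for the disks at power-on
print(12 * ACTIVE_W)         # ~108 W once they're all spun up
```

This is why backplanes and HBAs often support staggered spin-up: the steady-state draw is manageable, the all-at-once surge is what hurts.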

Thank you mate, your advice on the CPU is perfect. I found a Dell PowerEdge R920 on eBay. Any chance you’d please take a look at what it comes with and advise what else I might need to buy to get it up and running? It comes with:

4x E7 15-core 2.5GHz CPUs
4x 1100W power supplies
2TB DDR3 RAM
Dell Broadcom BCM5720 NIC - not exactly sure what this does
Dell PERC H730P 12Gbps SAS PCI-E RAID Controller Adapter 2GB NV Cache - Can I just stick a bunch of 20tb Seagate HDDs into that?
2 x DELL POWEREDGE SERVER SSD NVMe PCIe EXTENDER CARD - P31H2 - not sure what that is?
iDrac Enterprise - watched a 15minute youtube video and still not a clue what that is :rofl:

Also, will this be able to boot into Windows with a monitor+mouse+keyboard plugged in? Sorry for my ignorance :pray:

Because you’re asking about this, I am not sure you have the budget to hit your requirements, because this server does not hit them, for reasons I will go into. Assuming you’re getting a reasonable price on the R920, a system that actually hits your requirements will probably be an order of magnitude more expensive.

This CPU is very old and thus does not support AVX2, which means it is x86-64-v2, while modern CPUs are x86-64-v4 (for specifics about the requirements for each tier, see x86-64 - Wikipedia). I personally would not buy anything that is not at least v3 at this point.
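To make the tiers concrete, here is a simplified classifier. The flag lists are abbreviated (the Wikipedia article linked above has the full per-level requirements), so treat this as a sketch, not the official definition:

```python
# Simplified x86-64 feature-level classifier; flag sets are abbreviated.
LEVELS = [
    ("x86-64-v4", {"avx512f", "avx512bw"}),
    ("x86-64-v3", {"avx2", "bmi2", "fma"}),
    ("x86-64-v2", {"sse4_2", "popcnt"}),
    ("x86-64-v1", set()),
]

def feature_level(cpu_flags):
    """Return the highest level whose required flags are all present."""
    cpu_flags = set(cpu_flags)
    for name, required in LEVELS:
        if required <= cpu_flags:
            return name

# An Ivy Bridge-era Xeon E7 has SSE4.2 and AVX, but no AVX2:
print(feature_level({"sse4_2", "popcnt", "avx"}))                  # x86-64-v2
print(feature_level({"sse4_2", "popcnt", "avx2", "bmi2", "fma"}))  # x86-64-v3
```

On Linux you can compare these sets against the `flags` line in /proc/cpuinfo.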

Also, here is a very basic speed comparison of this quad-CPU setup against a modern 32-core: Intel Xeon E7-4880 v2 @ 2.50GHz vs AMD Ryzen Threadripper PRO 7975WX [cpubenchmark.net] by PassMark Software

In this case I think you are fine, but I would be careful: I bought an HP server that came with power supplies that only take 230V. I am working around that, but it is a hassle.

DDR3 is old, and will thus limit you a LOT in terms of bandwidth. Yes, it is a lot of RAM, but in terms of raw bandwidth, even though it has more channels, it will most likely have less bandwidth than a DDR5 desktop, let alone a modern DDR5 server.
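To put a rough number on that, theoretical peak bandwidth is transfer rate x 8 bytes x channels. The configs below are assumptions for illustration (channel counts and speeds vary by platform, and real sustained numbers are lower):

```python
# Theoretical peak bandwidth = transfer rate (MT/s) x 8 bytes x channels.
# Assumed configs for illustration; sustained bandwidth is always lower.
def peak_gbs(mt_per_s, channels, bytes_per_transfer=8):
    return mt_per_s * bytes_per_transfer * channels / 1000

print(peak_gbs(1600, 4))  # quad-channel DDR3-1600 per socket: 51.2 GB/s
print(peak_gbs(5600, 2))  # plain dual-channel DDR5-5600 desktop: 89.6 GB/s
```

Even before NUMA penalties, the old platform starts behind a commodity desktop per socket.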

This is a network card. It is only gigabit.

Yes, but it is designed to be used for hardware RAID, which I do not like or recommend - see the L1 video about software vs hardware RAID. You can use it in HBA mode where it will not do RAID, but reading online I am seeing mixed results about it in passthrough/HBA mode.

Sorry, this I don’t know either off the top of my head.

This is for remote management of the server.

I do think people could help you a lot more if you gave us more details of how you came up with your requirements, and what you plan to do with this server.


Chiming in just to agree that this part is going to be essential to get good advice :slightly_smiling_face:

Thanks so much for going through that. Budget isn’t a limitation, but obviously I’d prefer to spend less, and speed is less important than the total RAM + storage. Thanks for the advice on the CPU & breaking down the parts <3

The L1 video on hardware RAID was so good too :smiley:

My problem with EniGmAa’s suggestion is that getting a mobo with only 8 DIMM slots means I have to buy 8x 256GB RAM sticks for like £20k, and it seems like there are much, much cheaper routes if I sacrifice on CPU GHz, which I’m fine with.

I see; it’s for personal at-home use, working with software that creates millions of files which I will put in a database and only use myself. Ideally I can boot Windows on the server and have it work just like a super powerful desktop computer - please let me know if I’m misunderstanding anything, I feel so clueless :face_with_open_eyes_and_hand_over_mouth:

I’m just glad you appreciated it, and even took the time to watch the L1 video I recommended.

That sounds so nice, I wish I could say the same, but I am VERY budget constrained and was forced to learn to work around that, and also just lower my expectations (which I did a lot).

Thank you for clarifying that, your original post did ask for the “Fastest 32core (64 thread) CPU we can get” which I tried to account for in my earlier advice.

You are probably right about that ( I say it this way because all complicated and nuanced questions are almost always best answered with “it depends”). Also given that you used £, you almost certainly are in a different region than I am familiar with so my used market knowledge will not be fully applicable to you.

I personally recently bought a server (HP DL580 G9) that when maxed out can handle 12TB of DDR4 RAM and takes Xeon v3/v4 CPUs, which means it supports AVX2. The config I bought was “only” 1TB of RAM, but I was able to get it and a LOT of accessories for under $1k including shipping (shipping was honestly very expensive but very worth it). I still haven’t had the time to rack it or deal with its quirky quad 230V-only PSUs. I’m only bringing up my personal experience because it seems like you might be happy with something similar, and it might be a reasonable balance of performance and price.

I would still advise you to look deeply into how NUMA and core-to-core latency will affect your workloads. You also have to judge for yourself whether you can get by with the R920. We all draw the line between e-waste and useful hardware differently, and I know I would not buy the R920 because it is what I consider e-waste. Then again, I would desperately try to make it useful if it were literally free or already in my possession, but I am not that desperate for hardware, and its only redeeming factor in my eyes (the 2TB of RAM) has the critical flaw of being so slow.

I’ll try looking into databases and how they deal with NUMA and core-to-core latency, if I do learn anything useful, I’ll post it here.

I was hoping someone else would chime in on the software aspect, but yes (although you probably are better off with Windows Server).

Some examples of VERY exotic server workloads on Windows:

from : 896 Xeon Cores in One PC: Microsoft’s New x86 DataCenter Class Machines Running Windows


they also did it again

I am not recommending or endorsing Windows for server workloads. (I am just saying this because I have been dragged into unnecessarily heated arguments about it. I do enjoy reading about and even discussing the differences between the kernels, but personally I appreciate and make use of all three OSs - BSD, Linux, and Windows - and I don’t see that changing anytime soon.)

Everyone starts out clueless, so there is no shame in that; honestly, I’m not an expert either. I’m glad you’re taking the time to learn, though, and am happy to help.

I’ll look into NUMA now :smiley: and into this storage stuff, thanks again

I got two quotes today from UK businesses, one for a new HP Apollo Gen10 for £6800, the other for a refurbished PowerEdge R740XD at £7800. Sadly I can’t link them, but I’ll put the specs for each build and would love to know which you think is better <3

Brand: Dell
Model: Power Edge R740XD
Form Factor: Rack Mountable 2U , 3.5" 12 BAY + 4 x 3.5" Internal Bay + 4 x 2.5" Flex Bay
Processor: 2 x Intel Xeon Gold 6148 | 2.4 GHz - 3.70 GHz | 40 Threads | 20 Cores | 150W TDP
Memory: 24 x 64GB DDR4-2666 DIMM
Storage: 12 x 16TB SAS 12G HDD
Power: 2 x Dell Platinum 1100W Power Supply
Raid: Perc H740P Mini Raid Controller
Ports: 2 x 1GBe NIC | 2 x 10Gbe SFP
Management: iDrac 9 Enterprise
Bezel: Included
Rails: Included
1 Year Return To Base Warranty

HP Apollo 4200 G10 Configured

HP|4200 G10|2U|12LFF+12LFF-MID+4LFF-REAR||A|
Xeon|Gold 5218|2.30GHz|22M|16C|125W|SRF8T x2
HP|Apollo 4200 G10|Heatsink w/ CPU Guide x2
128GB|PC4-2666V-LR|2S4RX4|DDR4-21333LR x16
HP|P408i-a SR LH|2GB|FlexSA|INT|G10|INT Ctrl
HP|DLxxxG9,MLxxxG9|Smart Array FBWC Battery|6"
ST|8000GB|LFF|SAS|NHS|7.2K|12G|HDD x13
HP|LFF|Standard Carrier (ST)HS Caddy x28
HP|Flex Slot|800W ‘Platinum’|HS PSU x2
HP|DL2000, SL2500|Rail Kit

Appreciate any advice again!

I actually found these articles (they are older, but at a glance the concepts covered still feel relevant). I haven’t read them myself yet, but they seem interesting, so I plan to.

My first time seeing this site, but it does seem full of stuff I’d enjoy reading.

Anything specific you want to know about? There are quite a few options for how to handle storage (between all of the flavors of hardware and software management, which was definitely touched on in the hardware vs software RAID video). Some of the more important metrics for people are bandwidth, latency, and IOPS (all three are closely connected), plus redundancy and capacity. These last two are also connected, as the less redundancy you have, the more effective capacity you get; and they relate, less directly, to the first trio, since managing redundancy comes with a performance penalty.
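A sketch of how those first three metrics connect (simplified queueing math, not a benchmark; the 8 ms HDD latency is a typical assumed figure for a 7200 RPM drive):

```python
# Simplified relations between the three metrics:
#   bandwidth = IOPS x block size
#   IOPS ~= queue depth / latency   (Little's law)
def bandwidth_mbs(iops, block_kib):
    return iops * block_kib / 1024

def iops_from_latency(queue_depth, latency_ms):
    return queue_depth * 1000 / latency_ms

# One 7200 RPM HDD: ~8 ms average seek+rotate, queue depth 1, 4 KiB random reads
hdd_iops = iops_from_latency(1, 8)
print(hdd_iops)                    # 125 IOPS
print(bandwidth_mbs(hdd_iops, 4))  # ~0.5 MB/s random - why IOPS matter on HDDs
```

Same drive doing sequential reads pushes 200+ MB/s, which is why random vs sequential matters more than the raw bandwidth spec.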

Haven’t looked into either of those servers in my own search, so even though I could do a line-by-line comparison of them, I feel like that alone might be a bit misleading, as the details do matter (for example, when looking up the RAID card in the R920 I found mixed feedback from people trying to use it as an HBA; also see this: Lenovo is Using AMD PSB to Vendor Lock AMD CPUs - ServeTheHome). I know I read everything I could find about the DL580 G9 before buying it (I had no idea HP SmartMemory even existed; I learned about the firmware, the reliability/maintainability of the components, how thermals/noise management is on them, how difficult it would be for me to buy parts or repair it, etc.).

You are in a much better situation, as at least one of them has a 1-year warranty (what I was sold was AS IS, no return/no warranty), but a warranty claim is still a hassle to deal with, and IMO it is better to be sure of the product before you buy it.
