First Home Server

TL;DR: First home NAS build. A few hardware choices and some TrueNAS assumptions.

Hey everyone!

I’ve been lurking on the forum for a while now and have seen my fair share of videos on TrueNAS, Unraid, servers, motherboards, processors, e-waste, you name it. At some point, I got so over-excited I was planning to build a whole data center in my guestroom, with multiple NAS boxes, hypervisors, etc., using a mix of Threadripper and EPYC processors. This was inspired mostly by binge-watching tons of L1T videos. Then I realized I didn’t have the money to buy them, the space to place them, or the monthly income for the electricity bills. So now that I’ve calmed down and come back to earth, I’m planning my home server build. I want to start small and grow it as needed.

I’m practically new to this, but I worked with Citrix and VMware in the past, fighting (googling) my way through it. So I can’t tell for sure what I will be using it for, as I might keep adding things as I learn the extent of what I can do with it. I will start with TrueNAS SCALE, as this will be a storage server first, but I will definitely use a few VMs. I like the idea of setting up cloud storage with Nextcloud, and I want to try those other services in the Home Server Series. I will probably spin up a VM for work instead of using the laptop and put my stipend into improving my server XD. A Linux VM for learning purposes. And if I get hooked on it, whatever is needed to help with my data career, but that will come in the future.

The primary function is NAS for all my data hoarding and photography RAWs. VMs will be to play around and learn different stuff or see the extent of what can be done with TrueNAS. The only real VM I might have is that work-from-home Windows.

Planning it while considering potential future expansions is the only thing holding me back. Any advice will be very welcome.

Case: I have a Meshify XL with plenty of room for expansion. You don’t need me to tell you.

CPU: AM5 7600. Two doubts here:

  1. Considering I may have a handful of VMs running, will a 7600 be enough, or would it be better to get a few more cores with a 7700 or 7900?
  2. I was thinking of getting an AM5 X processor and running it at a lower wattage, with the option to increase it if needed.

Motherboard: I checked almost every X670E and B650E I could find. Sucks not having all the TR lanes, but whatever.

The MSI MPG X670E Carbon seems to have the most balanced feature set for a home server (for me): 6 SATA ports, 4 NVMe slots, x8/x8/x4 PCIe, 2.5G LAN, 2x USB-C.

As far as I can tell, it doesn’t share any lanes between the PCIe slots, NVMe slots, and SATA ports, which is what pulled me away from the B550 ProArt.

I like that with 4 NVMe slots and 6 extra SATA ports, I can run up to 5 SSD mirrors for the special vdevs if I ever need them (well, 4 mirrors and a single drive, considering the mounting locations in the case).
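From what I’ve read, adding a special vdev later would look something like this from the CLI (pool name and device paths are made up, so don’t quote me on the exact names):

```shell
# Add a mirrored special vdev for metadata to a hypothetical pool "tank".
# Note: a special vdev can only be removed again while the pool contains
# no RAIDZ vdevs, so this is a commitment once the RAIDZ exists.
zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

# Optionally route small file blocks to the special vdev as well:
zfs set special_small_blocks=32K tank
```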

Drives:

HDDs:

  • WD Red and IronWolf are easier for me to get in all their tiers and obviously cheaper.

  • MG08 (Can’t find them here)

  • Exos (I may be able to get the E series, at ~50% more cost per TB). Are they worth that much more?

  • WD Gold any good?

SSDs:

  • IronWolf 125 and 525 vs. WD Red SA500/SN750

  • Seagate has ~2x the TBW for +50% the price. Samsung is a close second to IronWolf, at a similar price.

At this point, I’m thinking of going with IronWolf for both types. Maybe Exos for the HDDs.

Pools:

  • I want to start with an 8-drive vdev (still unsure if Z2 or Z3), but I don’t have enough money to buy that many drives at the moment. I was thinking maybe I could buy 3, so I can create a mirror and have a spare. I then save $$$ till I have enough to buy 8 drives. Once I have them, I make the RAIDZ, and here is my doubt: can I “decommission” my mirror(s) so everything moves onto the RAIDZ? I believe this is possible with “Export/disconnect pool,” if I understood correctly. I’d then have 3 spares lying around, which could start my second 8-drive vdev if necessary.

  • I will run a secondary pool with mirrored SSDs for the VMs. I’m pretty sure I need it as a separate pool; I don’t think I can orchestrate which vdev the data goes to within the same pool, right?

  • The host will run from a third SSD. Would you recommend a mirror here, or is a single SSD plus backing up the configuration enough?

  • Another question I can’t find a specific yes-or-no answer to: is it possible/recommended, instead of using 2 mirrors for boot and VMs, to make a striped mirror with 4 drives and split it into 2 partitions, one for each of those roles? I believe it’s a no, because my logic tells me the only way would be to make the RAID beforehand and present it to TrueNAS as a single drive.
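To make the layout concrete, here’s roughly what I picture in zpool terms (hypothetical pool and device names, assuming I got the syntax right):

```shell
# HDD pool: start as a 2-way mirror with a hot spare (3 drives total):
zpool create tank mirror /dev/sda /dev/sdb spare /dev/sdc

# Separate SSD pool for VMs. ZFS stripes data across all vdevs in a pool,
# so you can't pin datasets to one vdev; a second pool is how you separate them:
zpool create vmpool mirror /dev/nvme0n1 /dev/nvme1n1
```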

Bonus:

I have the option of buying a used X99 system: ASUS Sabertooth, i7-5820K with a Corsair H110i GTX, 32GB of Vengeance, GTX 1070, Thermaltake S71 case, Corsair CX750 PSU. All of this for the price of a 7700X in my country. Is it a good starting point? It belonged to an architecture student who didn’t even use it for rendering, just 2D CAD. He over-dimensioned it thinking he might get into rendering eventually, but he didn’t, so it was never pushed or gamed on.

I know that system is old, but I live in Argentina, prices are through the roof, and getting newer stuff is tough. I’m still looking for ways to get the mobo if I go with the AM5 MSI.

Let me know if I omitted anything to help scope this out.

Thanks!


I’d suggest starting with the X99 system. It saves you tons of money right away and in the long term (electricity bills! AM5 needs notably more power than the X99 system, especially as you won’t run it at full power anyway). If you need the physical space for drives etc., consider swapping the parts into the Meshify case you want to use, then sell the (empty) Thermaltake case to replenish your budget a little.

In any case, it’s a good idea to buy an HBA card to connect the storage drives. It makes swapping out the mainboard easier, as you don’t need to find a modern board with 4+ SATA ports (which are rare; most cap out at 4x SATA). A 16-port SATA card requires only one PCIe slot and leaves the other slots available for a GPU, 10GbE networking, and/or an NVMe adapter card.

As for the other questions: run the OS from a drive in the provided NVMe slot. PCIe Gen 3 DRAM-less 1TB drives are getting very cheap (below US$50 on AliExpress) and even the 2TB models are well below 100 bucks. This offers ample space for the OS and VMs; perhaps use a SATA SSD as a backup target (but not as a RAID mirror!).

HTH!

Hey, thanks for your input!

I went to check the system yesterday. It looks fine, but I couldn’t test it because it was the dude’s father showing it, and he didn’t know much about computers, so there was some confusion with monitors and VGA/HDMI cables. I’ll go test it next week. The only drawback is that the top of the case was a bit cannibalized to fit a 280 rad, but that’s fine. I might keep it as-is to build myself a workstation later on; I really liked the case.

An HBA 16i card will come eventually, I definitely want one, but I’d rather put the money somewhere else while I have SATA ports available. Like getting Exos drives instead of IronWolf.

So does this mean that, performance- and safety-wise, running the host and VMs from an NVMe and backing up to an SSD mirror is better than having a mirror for each of them?
I mean, as I write it, it sort of makes sense: the NVMe is faster, I get copies of both across 3 drives now, and it’s not like I’m pairing one with a SLOG that chews through TBW. Is this doable from within TrueNAS? And how would I recover from the NVMe loss in this case?

The 5600 was a popular choice for Zen 3. 6 cores is usually plenty. ZFS doesn’t use much CPU by itself; compression is the only noticeable CPU load relevant to sizing the CPU.

You get the drives with the best price/TB. These are usually Toshiba MG or Seagate Exos. Both have 5-year warranties. Good choices for cheap capacity. Get what’s best for your market. Drives are still expensive compared to the rest of the system :wink:

AM4 and AM5 have ECO mode, and you can set the TDP to 65 or 105W if you want. Simple BIOS toggle. My 5900X in my server runs at 65W; it’s like 10% less performance when all cores are at 100%.

With mirrors, you can always buy two new drives and just add them to the pool. Easy zpool add. If you go RAIDZ, expansion is limited. With 3 drives you can do a RAIDZ1, then buy another 3 drives and add a second RAIDZ1 vdev. Double the vdevs = double the performance.
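Roughly like this (substitute your own pool and device names):

```shell
# Growing a pool of mirrors two drives at a time:
zpool add tank mirror /dev/sdd /dev/sde

# Or, on a RAIDZ pool, adding a second 3-wide RAIDZ1 vdev next to the first:
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
```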

Not necessarily. I run all my VMs off HDDs. ZFS does god’s work in caching data.

No need to use NVMe for the boot drive; cheap SATA SSDs will be fine. It just writes kilobytes of logs, which even an HDD has no trouble with. I boot my server from an external SATA SSD.

You are getting there. I built my homeserver two years ago and now I’m planning for a Ceph cluster with the homeserver as a node. And the network has to be upgraded as well. One step at a time; this hobby is expensive :slight_smile:

I find it hard to believe an X99 + CPU draws less power at idle than AM5 + CPU. Got any proof of that? Remember that we are talking about a 28nm CPU vs. a 5nm CPU. That should draw around 10% of the power for the same performance, or be ~10x stronger at the same power level. So the AM5 is probably the better call here, especially long term.

As for the NAS + virt, I give the same advice as before: two machines, one 7600, one 7900. The 7900 can be a smaller box with just the mobo and perhaps a 280mm AIO. This way the NAS doesn’t get in the way of your other stuff whenever you need to resilver your ZFS pool, and so on.

Other than that it looks pretty good! :slightly_smiling_face:

> The 5600 was a popular choice for Zen 3. 6 cores is usually plenty. ZFS doesn’t use much CPU by itself; compression is the only noticeable CPU load relevant to sizing the CPU.

> You get the drives with the best price/TB. These are usually Toshiba MG or Seagate Exos. Both have 5-year warranties. Good choices for cheap capacity. Get what’s best for your market. Drives are still expensive compared to the rest of the system :wink:

Noted, ty.

> AM4 and AM5 have ECO mode, and you can set the TDP to 65 or 105W if you want. Simple BIOS toggle. My 5900X in my server runs at 65W; it’s like 10% less performance when all cores are at 100%.

Yep, that’s where I was aiming. I’ll run them in ECO mode, and if I ever need more power I’ll just let them loose. Any reason you went 5900X? Or was it just what you had lying around?

> With mirrors, you can always buy two new drives and just add them to the pool. Easy zpool add. If you go RAIDZ, expansion is limited. With 3 drives you can do a RAIDZ1, then buy another 3 drives and add a second RAIDZ1 vdev. Double the vdevs = double the performance.

Yeah, what I meant was getting mirrors now, then getting the 8 drives, adding a RAIDZ vdev, and removing the mirrors so I’m left with only the RAIDZ. I thought I could disconnect the first mirrors to achieve that, but I wasn’t sure.

I know that would leave me with 2 spare drives, and if I ever want to increase my pool size, best practice means adding 8 drives again.

> Not necessarily. I run all my VMs off HDDs. ZFS does god’s work in caching data.

Would you recommend L2ARC and SLOG from the beginning in this case, or should I wait and add them if I see fit?

> No need to use NVMe for the boot drive; cheap SATA SSDs will be fine. It just writes kilobytes of logs, which even an HDD has no trouble with. I boot my server from an external SATA SSD.

Awesome.

> You are getting there. I built my homeserver two years ago and now I’m planning for a Ceph cluster with the homeserver as a node. And the network has to be upgraded as well. One step at a time; this hobby is expensive :slight_smile:

I just got a Cisco 2950 from my father’s company. It’s not much, but it is 24x 1GbE. I got my Meshify, and I’ll get this used PC, which leaves me an acceptable case for a future VM server build. Slow but steady.

Thanks!

Yep, the second home server for VMs will come in time. For now, this is as far as I can stretch.

7600 + 7900: that’s over US$1000 here just for the processors. I can probably get 2 decent X99 systems for that money, maybe including a GPU that’s an upgrade over the GTX 1660 Ti in my gaming rig (which is running on a Bulldozer hahahaha, FX-8350).

Ty!

That won’t be possible. You can’t remove vdevs from a pool with RAIDZ; vdev removal is the privilege of mirror-only pools. You have to get creative: move stuff elsewhere, destroy & recreate the pool, and send the backup snapshot back. That’s how the pros do it :lol: :+1:
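Something like this, roughly (pool names made up, and practice on scratch data before doing it for real):

```shell
# 1. Snapshot the old mirror pool and replicate it to a temporary/backup pool:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F backup/tank

# 2. Destroy the old pool and build the 8-drive RAIDZ2 with all the disks:
zpool destroy tank
zpool create tank raidz2 /dev/sd[a-h]

# 3. Send the data back onto the new layout:
zfs send -R backup/tank@migrate | zfs recv -F tank/restore
```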

I was overestimating what CPU I needed. No buyer’s remorse, as I got it for a good price even by today’s standards. And I will soon have a use for the additional cores, so it makes some sense after all. Even at 65W, those 12 cores pack a punch.

Even if 6 cores turn out to be too little in a couple of years, a 5900X/5950X can be had for cheap at that point.

These two can be added later; no need to buy drives for them right away. I found a use for L2ARC, and if sync writes are bothering you, a SLOG is really worth it. For a homeserver, a single SLOG drive will do fine. It’s just a matter of plugging them in and adding them to the pool, and both can be removed at any time.
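E.g. (made-up pool and device names):

```shell
# Attach a cache (L2ARC) device and a log (SLOG) device to the pool:
zpool add tank cache /dev/nvme2n1
zpool add tank log /dev/nvme3n1

# Both can be detached again at any time without harming the pool:
zpool remove tank /dev/nvme2n1
```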

What’s your plan for memory? I recommend 64GB in 2x32GB DIMMs to allow expansion later on.

Yeah, I understand money is tight, though it’s not that big of a stretch considering your NAS drives alone will cost about as much. This is a build I suggested for a friend: max draw < 150W, and idle is something like 40-50W. He’s an SFF nutter though, and AM5 ITX boards, well, there’s no bargain to be had right now. :slight_smile:

I reckon you can shave this down to somewhere around $800-850 with a regular mobo and the Prism cooler included with the 7900. Remember, more or less any motherboard will do for this; personally I’d go for something like the MSI Pro B650M-A WiFi at $159. You’d still need to find a good case and PSU though.

Not saying you should buy it, just be open to possibilities. Tech has evolved, and these days there really is no need whatsoever for big iron in the home. :slight_smile:

Small home virtualisation server

| Type | Item | Price |
| --- | --- | --- |
| CPU | AMD Ryzen 9 7900 | $379.00 |
| CPU Cooler | Noctua NH-L9A-AM5 chromax.black | $54.95 |
| Motherboard | Gigabyte B650I AORUS ULTRA | $259.99 |
| Memory | TEAMGROUP T-Force Vulcan 2x16GB DDR5-5600 CL32 | $81.99 |
| Storage | Samsung 970 Evo Plus 1TB M.2-2280 PCIe 3.0 | $54.99 |
| Case | In Win Chopin Pro | $157.66 |
| **Total** | | **$988.58** |

Ok, I did my homework and now I get it. The moment I spin up that 8-drive RAIDZ vdev, I make it impossible to remove vdevs from that pool.

So that leaves me with:

  1. Make it a striped-mirrors pool and keep adding drives in 2s. Not a fan of the lower level of redundancy here. I would definitely use this if it were a VM-only server, for speed, and I’d be snapshotting the VMs somewhere else anyway.
  2. Grab some spare hardware I have lying around (or some e-waste from my father’s office), detach the mirror, move it there, and then copy the data back over the network.
  3. Add a second pool, move the data, detach both, and reattach the second as primary. Hopefully I’ll be confident in my skills once the time comes.

ATM, the X99 system I’m checking has 4x8GB, although when I went to see it the other day I could swear it had 6 DIMMs installed. :man_shrugging: Hopefully next time I go they’ll have a working monitor so I can turn it on and check everything is alright.

AFAIK, this mobo (Sabertooth X99) can take a max of 64GB of non-ECC memory, in an 8x8GB config. I would need to get another 4x8GB set and that would be my max…

I have this board. It can take up to 128GB of non-ECC memory on a desktop chip, but it’s going to be a bit of a lottery on a 5000-series CPU (16GB DDR4 sticks were not out when those CPUs released), less so on the 6000-series CPUs for X99 (there are even some overclocked kits on that board’s QVL when using those CPUs). The RAM speed will drop significantly on a 5000-series though (i.e., plan on it running at 2133 if it runs at all).

The board also accepts Xeons; ECC support may or may not be official, but it should* work (Intel CPUs: Xeon E5 vs. Core i7 | Puget Systems).

Well, I don’t mind big iron if it’s a server thing. For desktops, yeah, I do prefer them smaller. Unless I need several expansion cards, I don’t see the point in going over ITX + NVMe.

And honestly, my initial thought was as you said: 7600 and 7900, NAS and VMs.

I will definitely use AM5 for my next build and move my VMs there. mATX or mITX, a couple of SSDs, and snapshots to the NAS. Or maybe a striped mirror with 4. I’ll see…

But I also want to have something running now. I’m itching to start playing around. I’ve been reading books like Absolute OpenBSD and the FreeBSD Mastery series, and I want to put them into practice. I also want to set up the NAS so that instead of external drives + computers, I do external drives + NAS and free up my computers.

I chose this platform so I can add a used Xeon in the future and unlock the other lanes. I’m trying to find a 2689 v4.

But I wasn’t aware that using Xeons made it somewhat possible to support ECC memory and also increased the max memory I can have. Thanks for the tip!

Well. Update.

I got to test the X99 system.

Everything was fine until I realized it was showing only 24GB of memory in Task Manager, which seemed off. I checked in wmic and only saw 3 slots with memory. I checked physically and saw 4. I switched D1 with C1 (C1 was not being recognized), and now I see C1 in wmic but not D1. So I assume the DIMM is fried and the slot is fine.

However, with a system that old I feel a bit uneasy. I have no way of finding out what killed the memory. I’m not sure if it’s the mobo, but my lack of knowledge leads me to think: the memory was plugged into the mobo, so what else could it have been other than the mobo or something spontaneous?

It’s a shame, because I was sort of looking forward to it. Even as an old system it seemed decent; it has an AIO and I really dug the case. But you know, with old things, once something starts to fail you don’t know how long it will take for everything else to follow.

So I’m now thinking: instead of going with the current gen, which is way too expensive and hard to get here, use something from the previous gen. For a NAS server I think I’ll be fine, and it’s certainly better than 8 gens back. If it were a VM server I might reconsider current gen, but if 2 days ago I was willing to go 8 gens back…

PCPartPicker Part List

For drives I’ll see what I can find.

I will need a GPU though. If I run Plex, LAN-only, will I need something beefy? What would you recommend?

Any other things I might be missing?

Thanks!

Cheapest Intel Arc you can find, and hope that you can use HW acceleration.


Alternatively, the Erying boards come with 11th-gen Intel mobile CPUs, and thus onboard graphics, for less than the price of the ASRock mainboard in your list.

https://www.aliexpress.com/store/all-wholesale-products/1102565325.html

The i7 11800H board is also 8c/16t:
https://www.aliexpress.com/item/1005005253205275.html

As for drives, I recently got some refurbished 16TB drives from eBay (Florida-based shops) for not a lot of money. Still, getting them from there to your place might prove too expensive.

(The other offer is no longer valid; I got 2 out of 5 drives, and that was 2-3 weeks ago.)

HTH!


Seems obvious, given their iGPUs are great for that. Can’t believe I didn’t think of it.

Yeah, the HDDs will be whatever I find cheap when I’m ready to buy them. Hopefully I’ll find affordable 4TB Exos.

The 970s will be mirrored, for the host and any VMs I have running.

Bumping this with the following question:

Still haven’t moved forward; I got sidetracked with work and other stuff.

I still feel skeptical about not buying current stuff.

The recommendation to use the cheapest Intel card for the Plex part left me thinking:

Wouldn’t I be much better off getting an LGA1700 mobo and using, for example, a 13500 with the UHD 770 and its 2 encoders, and not waste a slot on a GPU? That processor is still 65W. The mobo could be the Supermicro MBD-X13SAE-F.

Thanks in advance!

What do you think of this? I can spare some more money for this project now, and I have a friend who is willing to bring this stuff back with him from the US.

Main doubts:

  1. Should I aim for ECC on a VM server? That would affect the mobo choice and idk what else…
  2. I got the Optanes already, to install Proxmox there. Is it overkill to have them in RAID1 plus another RAID1 of SSDs for the VMs? Would you use consumer SSDs? Or maybe 1 NVMe for Proxmox and another high-capacity NVMe for the VMs?

PCPartPicker Part List

| Type | Item | Price |
| --- | --- | --- |
| CPU | AMD Ryzen 5 7600 3.8 GHz 6-Core Processor | $216.66 @ Amazon |
| CPU Cooler | Noctua NH-L9x65 chromax.black 33.84 CFM CPU Cooler | $69.90 @ Amazon |
| Motherboard | ASRock A620I LIGHTNING WIFI Mini ITX AM5 Motherboard | $139.99 @ Newegg |
| Memory | G.Skill Ripjaws S5 48 GB (2 x 24 GB) DDR5-6000 CL40 Memory | $154.99 @ Amazon |
| Storage | Intel Optane P1600X 58 GB M.2-2280 PCIe 3.0 x4 NVMe Solid State Drive | $33.99 @ Newegg |
| Storage | Intel Optane P1600X 58 GB M.2-2280 PCIe 3.0 x4 NVMe Solid State Drive | $33.99 @ Newegg |
| Storage | Intel D3-S4520 1.92 TB 2.5" Solid State Drive | $184.14 @ Amazon |
| Storage | Intel D3-S4520 1.92 TB 2.5" Solid State Drive | $184.14 @ Amazon |
| **Total** | | **$1017.80** |

I think this is a good system overall. I’m a bit skeptical of the A620 board; I would pay the $40 extra for an ASRock B650I Lightning. If going for ECC, the ASUS Strix board is the only one guaranteed to support unbuffered ECC, but that’s $311, which is atrociously high just for ECC.

I’m a fan of the 7900 for a small-but-powerful server that allows extensive virtualisation, but a 7600 works just as well, and if you only need half the cores then yes, it’s a good option. I would pay the extra $200; not everyone has that need though.

On SATA SSDs, I’m getting more and more negative. The interface is just too slow for what SSDs need to do; U.2 or U.3 is the way to go for future-proofing. That said, SATA is still good, just old, and it limits a few use cases.

A mirrored Optane boot setup is really nice if you can find one for affordable prices. :slight_smile:

For what it’s worth, I think mITX is slightly too constrained if you also want storage; ideally you’d want 8 M.2 slots in addition to a single PCIe x16 slot (that could be bifurcated to x4/x4/x4/x4 or x8/x8, of course). I don’t know if that’s possible in the mITX space, though 4 M.2 should be if you can utilize the back side and remove some legacy headers. But that’s kinda off-topic for this discussion :slight_smile: