Home Server Build advice?

Hello everyone, I am new to the Level One forum, but not to watching and reading the forum posts over the past 6+ years. A bit of background: I have been building and maintaining computers for about 35 years. My experience with Linux is limited, though not quite intermediate either; like everything else it is a work in progress and goes back more than 20 years. I have built and played with workstations but have never completely stayed with any one distro other than RHEL/CentOS, which I used for a storage server several years ago. Now I want to build an extreme server with ZFS, but I am unsure how to set up my mix of drives to maximize storage while still having a good bit of redundancy!

I am looking at building a home server from a second-hand enterprise Dell XC730xd 14-bay rack mount. The immediate usage will be network file storage, backups of workstations and laptops, and media/file sharing plus Plex. But I also plan on working with some VMs and experimenting quite a bit. My current build list is below; I would like some suggestions/feedback, along with thoughts on Unraid vs TrueNAS vs OMV.

Build List:
Case: Dell PowerEdge XC730xd 14-bay; 12 LFF plus 2 rear SFF bays with 600GB SAS drives for read and write cache.
CPU: 2x Intel Xeon E5-2660 v4 (14 cores / 28 threads each)
Motherboard: Dell- not listing the part number
RAM: SK Hynix (4x 32GB) and (4x 16GB) DDR4-2400 = 192GB
PSU: Dual 1100W Dell Platinum
OS SSD: 128GB Innodisk SATADOM SL-3ME v2 with the latest firmware
Cache drives: 2x Dell Constellation 2.5" 600GB 10k RPM SAS in a RAID 1 mirror (owned)

Need the help here mostly:
HDDs: 2x 14TB Seagate Exos, 2x 6TB Seagate Exos, 4x 4TB Dell enterprise SAS, 4x 3TB Dell enterprise SAS (all owned)

GPU: Quadro RTX 2000. The Quadro is from eBay and works quite well without adding extra heat to the server.

Any feedback would really be appreciated. All the parts labeled (owned) are either coming out of my system during an upgrade or are lying around collecting dust. That, along with my growing movie library, really prompted this; aside from that, it is an excuse to build a server to play with at home and learn some more, and to maximize my time, effort, and the storage value of the drives I have on hand.

My other concern is leaving room for expansion later next year, when I will be replacing all of my work servers' HDDs and buying those drives for my own server. At that point I can run 12x 12TB HDDs with two-drive-failure tolerance in RAIDZ2, which is my final goal; I just need storage now, until then.

Thank you in advance for any and all suggestions; awaiting comments and critiques!

Hey there, @Bigbird58. Welcome to the forum!

Since your post is about a separate concern from the thread you posted in, I took the liberty of moving it to its own thread. This will help with visibility and keep the forum topics more organized. Feel free to edit the thread's topic if you think there's a better name for it; I just gave it a name I thought was suitable.

I’m currently in the office, but I’ll hopefully be able to provide input to the actual topic when I find some downtime.

Welcome to Level1!

I assume that your new server would run 24/7. I also assume that, being a home server, it will be within ear-shot of living quarters - a typical home lab.

Sounds interesting and quite enterprise-class. What I associate with this is lots of noise and high power consumption, which is probably the opposite of what you're looking for in a home setup.
A dual-CPU setup at home is quite exciting if you don't have regular access to one at work, but this particular CPU is quite old, and an 8-16 core AMD desktop CPU will offer you more compute horsepower at much lower noise and power consumption. Yes, it may be a little more expensive for that reason, too, but it will also be more future-proof.

Impressive.
Each RAM stick consumes >5W of power. I am not sure what applications require >128GB of RAM, which you can easily get in a desktop setup.

Impressive.
Do the math on what 500-600W (the sweet spot of power consumption to run these efficiently) would cost you in your area per month/year. Are you OK with that cost?
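As a rough back-of-the-envelope check (the $0.12/kWh rate here is just an assumption; plug in your local rate):

> echo "550 * 24 * 365 / 1000" | bc -l   # ~4818 kWh per year at a constant 550W draw
> echo "4818 * 0.12" | bc -l             # ~$578 per year at an assumed $0.12/kWh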

You sound quite proud of and convinced that these drives offer great value. Honestly, I would consider ditching them in favor of some SATA, SAS, or NVMe SSDs; pricing on those has never been this good.
Also, it's quite unclear whether you would even need or benefit from cache drives.

The good news is that your proposed setup has plenty of room for drives.
I would start with a RAID10-style setup in ZFS, i.e. a pool of mirrors. It is the most flexible configuration because mirror vdevs can be reshaped or removed later (as opposed to a RAIDZ(1/2/3) setup), and it is also the most performant.
This would yield 34TB of usable storage from all your drives across 6 mirror vdevs (14 + 6 + 4 + 4 + 3 + 3), which should offer quite a bit of read/write performance.
When you finally get your hands on the 12x 12TB drives, you should probably look at a configuration of 2 RAIDZ2 vdevs of 6 drives each, which would yield 96TB of usable storage space (a rough sketch follows below).
This is not set in stone and depends on a bunch of assumptions, but it should give you something to think about.
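For reference, a sketch of that future 12-drive pool (placeholder device names; in practice you would want /dev/disk/by-id paths, so double-check before running anything destructive):

> zpool create -o ashift=12 bigpool \
    raidz2 /dev/<12tb_1> /dev/<12tb_2> /dev/<12tb_3> /dev/<12tb_4> /dev/<12tb_5> /dev/<12tb_6> \
    raidz2 /dev/<12tb_7> /dev/<12tb_8> /dev/<12tb_9> /dev/<12tb_10> /dev/<12tb_11> /dev/<12tb_12>

Each 6-wide RAIDZ2 vdev gives up two drives to parity, so 2 x 4 x 12TB = 96TB usable before ZFS overhead.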

Without getting too deep into the weeds, I would suggest Proxmox. You can use ZFS if you choose to, you can pass HBAs or disks through to your VMs if you want, and you can try out different OSes at will; lots of possibilities.

Personally, I recently set up a new media box with Proxmox (all four of my servers have it now), replacing Ubuntu Server. Since I still like using Ubuntu Server, what I did was create my ZFS pool on Proxmox, share it with Ubuntu Server through SMB, and have Ubuntu Server do the real work, which in my case is running Jellyfin (I got sick of Plex asking me to pay YouTube and Amazon for content that I already had on there!). Using VirtIO NICs means really fast access for the VMs on that same box. And since it's SMB anyway, I just access that share from my other servers as needed, so it is also being accessed by Airsonic and some other stuff over 10G. It might sound slow with all those layers (I thought it did, anyway), so to test, I ran a VM on my other physical machine with its disk on the media server, and it runs just fine.

Long story short: whatever you go with, my suggestion is now and will always be to start with Proxmox and build up from there; it just gives so much freedom.
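If it helps, the rough shape of that Proxmox-side setup is something like this (pool, dataset, and user names here are just placeholders; adjust to taste):

> zfs create tank/media          # dataset on the existing pool, mounted at /tank/media
> apt install samba              # Proxmox is Debian underneath
> cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /tank/media
   read only = no
   valid users = mediauser
EOF
> adduser mediauser              # local user for the share
> smbpasswd -a mediauser         # set its SMB password
> systemctl restart smbd

Then the Ubuntu Server VM (or anything else on the network) just mounts //proxmox-host/media like any other SMB share.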

thanks

First, thanks for the insight and advice. The CPUs are 14-core / 28-thread and 105W TDP. With all of the fans at 23% (6,340 RPM), 29°C temps, RAM, drives, and CPUs, I only max out at 326W at idle / 491W at full load. That comes to approximately $32.53 per month or $395.74 per year, and it is a business write-off. Sorry, I average 60% of the house's power toward this due to all of my home office setup.

Noise is 64 dB right next to it. I have it in a rack in the coat closet in the entryway, which I turned into a server/IT closet. I cut out two openings for a filtered intake holding two Noctua 320 x 44mm fans, then cut in a return-air vent, also filtered, with an exhaust fan in the attic 8 ft from the closet to hold down the noise; it is insulated as well, right above the family room. Once the closet is closed, nothing can be heard. I am also running an R430 8-bay for Plex and an R750 all-NVMe 40TB for my business and customer websites and data storage.
All told, with three 2,500W sine-wave UPSes, fiber, and two 48-port switches, the level just outside the closed door is only 78 dB, just a bit louder than a four-person continuous conversation. I will have sound pads installed later next quarter.

The RAM is for the VMs that I plan on running, so that I don't have any lag.

The 600GB SAS drives are temporary until I can find SSD drive adapters so I can replace them with 2x Gen 3/4 NVMe drives that I have on hand.

So for the RAID10 setup in ZFS, how are you looking to have that set up? Making several striped sets of vdevs or LVMs and then combining them into a RAID10 pool? FYI, I typically run RAID 5 or 6, but with matching drives on ext4. I am a newbie when it comes to ZFS; it's just trying to get the most out of mixed drives that has me rattled.

Great information and I really am appreciative of the comments and suggestions. Thank you!

Again, without getting into the weeds myself: are you saying that even though I plan on using TrueNAS Scale and Docker, Proxmox 8 is the better solution, with TrueNAS set up on top after passthrough, Plex in another VM, etc., for all the other things I want to experiment with?
In doing this, I know that a separate NVMe drive should hold all the VMs, but do I create a share on a spinning-rust drive to use as the data store for all the VMs?

I just saw you've got those 14C/28T Intels - my main VM rig has a pair of E5-2680 v4s in an HP Z840 and I LOVE it; I might buy a second one, for that matter…
So here's the thing: whatever is running ZFS is going to want (need) direct access to the disks, so it gets a bit funny running TrueNAS on Proxmox, since either of them could do ZFS. If you're dead set on TrueNAS, you could still go with Proxmox, pass the HBA through to the TrueNAS VM, and then still use those other cores for other VMs in the future if you think you might want to. If you're totally positive that you'll only ever want to run TrueNAS and the apps it has, then putting TrueNAS on the bare metal is probably the way to go.

The reason I recommend Proxmox is that it gives you the flexibility to change your mind later pretty easily, since you can spool up a different VM in minutes. It's really easy to run full-fat VMs (inside of which you can run Docker) or LXC containers, so it gives a lot of flexibility. The only downside of TrueNAS on bare metal is that you're more or less limited to what's in their app store, whereas with Proxmox you can still use TrueNAS, plus Docker and lots more.

Short answer: if it were me, I'd install Proxmox, then TrueNAS on Proxmox as a VM, and pass the HBA or the disks through Proxmox to TrueNAS. As for data for the VMs, yes, I would have that on the ZFS store shared through SMB. You could even have the VMs themselves on the ZFS pool shared back to Proxmox via SMB, but putting them on NVMe will definitely give you better speeds.
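As a rough sketch of what that HBA passthrough looks like on the Proxmox side (the VM ID and PCI address below are placeholders, and you need IOMMU/VT-d enabled in the BIOS and on the kernel command line):

> lspci | grep -i -e lsi -e raid      # find the HBA's PCI address, e.g. 03:00.0
> qm set 101 -hostpci0 0000:03:00.0   # hand the whole controller to VM 101 (the TrueNAS VM)

After that, TrueNAS sees the raw disks and manages the pool itself, which is what ZFS wants.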
Now, if it were truly me, I actually just wouldn't use TrueNAS. I like it and all; it is great for what it is, a simple-to-use NAS OS that runs containers. I just personally like the ability to spool up a full version of Windows 10 next to a full version of MX Linux next to Ubuntu Server, etc., and I kind of feel like all those cores and threads might be put to better use on other things. If it were me, I would use Proxmox, put the ZFS pool directly on it, share it over SMB to all of the other VMs, and then put your VMs on NVMe as needed with access to that share. If you want Docker containers, such as with RunTipi or CasaOS, I would just create an Ubuntu Server instance and put them on there, with access through SMB to your ZFS share, which is managed by Proxmox (I have exactly that in my setup now; in fact, I have Docker on four separate VM instances across three machines).

You seem pretty set on TrueNAS, though, so I think the real question is: will you want to run other full VMs on the same system, or will you only run what's available in TrueNAS? If you're good with just what's in TrueNAS, then go with that; but if you want to be able to run any VM, LXC, or Docker container, and even put TrueNAS on there too, then I would start with Proxmox.

Boy, that's long… My personal homelab has an i7-7700K with 32GB of memory running Proxmox with a ZFS share on it and a few Ubuntu VMs. That share serves my other three servers, including the Z840, and since it has 10G networking it's all nice and fast; but that's what works for me. I like being able to spool up a new VM in a few minutes to try out some new thing I found and then kill it if I don't like it, so it's just what works for me. You have different requirements, of course, so if TrueNAS is all you're looking for, then go with that.

I really like your suggestions. But no, I'm not truly set on TrueNAS. My son-in-law likes Unraid, but I'm not really into paid applications when most things Linux are free. I have been kicking around all of the major NAS OS ideas but have not been able to find what I truly want.
My main concern is data security/redundancy. I really want to play with VMs and get better across the whole stack, to be more knowledgeable and secure in that knowledge rather than guessing at it like I am now. So again, this is a workhorse: like the pickup you buy that will haul or pull things but still get you there in style, yet never let you down when you need it the most!

My HBA is the PERC H310 Mini in IT mode, so it simply passes the disks through for software RAID. I have had other cards die, like older Adaptec and LSI RAID HBAs, and the array was gone or corrupted to the point that all data was lost. Backups were available, but some data was still lost just the same. With 35 years in this, I guess you could say I am a bit gun-shy of another tragic loss of data; it reminds me of a house fire and losing family heirlooms. As such, I vehemently agree with and practice 3-2-1 backups.

Sorry, back on track: any advice that you have, I will take into close consideration.

A lot to take in. With this info, I am now going to change my mix of small and large HDDs to this for my home server:
1x 14TB Exos with 5x 8TB Dell Constellation drives in a RAIDZ1 pool, then mirror the same into another RAIDZ1 pool for redundancy. The boot drive is a 128GB Innodisk SATA-DOM. I have two PCIe x8 adapters for bifurcation, but the Optanes I have on hand are only 32GB each; should I buy others, or use these until they can be upgraded for the special mirror (RAID 1)?
Also, the two rear SFF bays hold NVMe SSD adapter caddies for 2x 1TB Gen 3 NVMe drives for the metadata (rough command sketch below).
As a newb in this realm of server builds, am I following close enough to what has been said so far? 54TB is more than I'll ever need for storage and streaming. My backup is an R440 10-bay SFF with 8x 2TB NVMe 2.5" drives in ZFS RAIDZ1, a 64GB SATA-DOM boot drive, and another pair of 1TB Gen 3 NVMe drives on the same type of PCIe x8 adapter for metadata, but no special vdev. Yet!
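If I am reading the special/metadata vdev idea correctly, I believe adding those two NVMe drives later would look roughly like this (placeholder names; please correct me if I have it wrong):

> zpool add <poolname> special mirror /dev/<1tb_nvme_1> /dev/<1tb_nvme_2>
# the special vdev holds pool metadata (and optionally small blocks); losing it loses
# the pool, which is why it has to be mirrored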

Not too impressed by those power consumption figures, to be honest. Here is a system spec with a SuperMicro H13SAE-MF, just to give you an idea of what modern consumer hardware with a server board can offer:

PCPartPicker Part List

Type | Item | Price
CPU | AMD Ryzen 9 7900 | $427.99
Motherboard | SuperMicro H13SAE-MF | $539.00
Memory | G.Skill Zeta R5 4x 16GB Registered ECC DDR5-6400 CL32 | $499.99
Storage | TEAMGROUP MP34 1TB M.2-2280 PCIe 3.0 | $46.99
Power Supply | be quiet! Dark Power Pro 11 650W 80+ Platinum | $119.90
Total | | $1633.87

I did not include a case, drives or GPU in this, and yes the ECC RAM is expensive right now, but that board does support up to 192 GB of memory.

This system will run circles around the dual Xeon E5-2660 v4 PowerEdge (Tech City has a good comparison) and will draw about 20-25% of the power. From the power savings alone, with a power bill of ~$400 a year, you'd save ~$1500 in power bills over five years; so if you run it constantly for five years, it will have nearly paid for itself.

Not posting this to try and convince you to go for an AM5 system or anything like that; this example build system is by no means perfect and for your purposes would perhaps be a wee bit too small (though the U.2 / SAS port certainly helps for disk drives). I am, however, hoping it will help you realize the leaps taken in recent years and what it does to the value proposition of pre-2018 server hardware.

The fact of the matter is, older server hardware is usually on the second hand market because there is no longer a financial incentive to keep them running. So be careful what you are buying on the used market.

Other than power consumption, though, I don't really see any big holes in your plan. Be aware that HDDs are singing their swan song and will be completely replaced by SSDs before 2030; but HDDs still make sense for now and for a few more years, and of course anything you already own should be run until it no longer serves a use case, but that goes without saying. :slight_smile:

End of the day: your setup, your power bills, your money, your foot, your gun. I can only advise from my PoV. So have fun and knock yourself out! :grin:

You would set this up in ZFS as multiple vdevs, each being a mirror. (Just for clarification: a "vdev", or "virtual device", is a ZFS concept that lets ZFS spread IOPS equally over many devices. Each vdev can consist of a single drive, a mirror, or a RAIDZ array of drives.) No other software is involved (no LVM, mdadm, or otherwise).
It’s possible to create that zpool from the command line in a single command or in stages. E.g.
> zpool create -o ashift=12 mypool mirror /dev/<14tbdrive_1> /dev/<14tbdrive_2> mirror /dev/<6tbdrive_1> /dev/<6tbdrive_2> mirror ...
or
> zpool create -o ashift=12 mypool mirror /dev/<14tbdrive_1> /dev/<14tbdrive_2>
> zpool add mypool mirror /dev/<6tbdrive_1> /dev/<6tbdrive_2>
> zpool add ...
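Once the pool exists, you can sanity-check the layout and capacity with:

> zpool status mypool     # shows each mirror vdev and its member disks
> zpool list mypool       # raw size, allocated and free space
> zfs list mypool         # usable space as the filesystems see it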

TrueNAS Scale or Proxmox offer the same in a web GUI. I personally learned ZFS from the command line and get confused in the GUI; I think most people prefer the GUI, though.

Ahh. Being a business changes a lot of my assumptions. Being a business run from the primary residence adds a special twist.

You’ve clearly been working on your setup for a while :grinning:

Also, you clearly have decided to manage IT needs for your business yourself.

With a 3-2-1 backup, one would typically look for a performance-tuned setup for the primary storage, and capacity/cost-optimized storage for the backup locations.
ZFS performance correlates strongly with the number of vdevs in a pool, so the best performance will come from a zpool of mirrors.

Lastly, both Proxmox and TrueNAS provide competent platforms for virtualization. They have more in common (from a user's perspective) than what separates them (the choice of software stacks used to implement features).
My recommendation is to check both of them out and stick with one. While it's technically possible to run TrueNAS on top of Proxmox (or even the other way around), I personally think it defeats the point of either tool and adds unnecessary complexity.

Thank you, [wertigon]; you are a good futurist and an intelligent, younger person than I. I have been around to see servers that used SCSI and IDE drives in 10-20MB capacities, drives the size of cinder blocks measuring 8 in. x 8 in. x 16 in. They also used an enormous amount of energy, and a single server took up several rooms. We are in a more modern age where, yes, I need to be a bit more conscientious about energy usage. Yet it's a money thing for me. I have been saving the 14TB drives for the past two years, and came by the 6TB drives when I bought my Synology in 2018. The rest are server leftovers that I repurposed rather than sending them to a landfill as e-waste. The second-hand Dell rack mount was $425 complete off of eBay, and again, not e-waste. Growing up in my time, we fixed and repurposed everything; today's society just buys everything new and throws everything away without trying to repair or reuse anything. Please don't think that I'm upset or trying to berate you in any way. I am truly born of a different mindset and generation than you. My money is not spent easily but wisely, to the point that this plan has taken me four years to put together. Raising 5 kids, 14 grandkids, and 4 great-grandkids takes a lot of my time and money as well, something I don't like to see wasted.
Your spec for the new system is good and pretty well thought out. But if I were to do it that way, I would have gone with an AMD EPYC for the CPU. The motherboard will support registered ECC memory, but the cases I would use are approximately $360-490 with hot-swap bays. All in, over the four years of saving and planning, I have spent a total of $962 on this build; and all in on my whole setup, $3068 out of pocket for two backup servers, the main server, switches, closet build materials, APC UPSes, power bars, ventilation, etc. And I did all the work myself. So please forgive me and understand that this is not about present, newer tech so much as it is about me learning existing tech and future-proofing what I have on hand now. Again, I know the tone of my message sounds harsh, but it is in no way meant to be abrupt or condescending to anyone about the comments here. I really do appreciate your input and support with all of your comments. I'm just old and trying to stay relevant by learning new things!

[jode], thank you for the commands; I will do as you suggested. I just purchased several of Michael Lucas's BSD/ZFS Mastery books, hoping they will help with my command-line approach to this project and any upcoming issues. Great information here, and I am very glad to be a part of your community; I will definitely keep learning. Thanks!

Hey, no worries, I get where you are coming from :slight_smile: EPYC is extremely good for what it is, but the power budget is around 200-300W+ then, which pushes it outside home-turf territory in my opinion; of course, for enthusiasts and people who really need it, it is awesome! Then there are NAS devices such as the all-SSD Asustor Flashstor that are specialized to be a NAS and have a weak-ass CPU that is not very good for much else than, specifically, being a NAS. So, as an example, if you were to keep these up 24/7, here is roughly what each could cost you:

Product | Bays | Power draw (avg) | kWh / Month | kWh / Year
Asustor Flashstor FS6706T | 6 | 18W | 13 kWh | 158 kWh
Asustor Flashstor FS6712X | 12 | 35W | 25 kWh | 307 kWh
Ryzen 9 7900 frankenbuild | 12 | 70W | 50 kWh | 613 kWh
Dual Xeon E5-2660 v4 server | 12 | 170W | 130 kWh | 1577 kWh

The above table is a rough example of what you would be paying from the wall for each setup. Please note that these are not exact numbers, just in the ballpark of what you would spend on electricity each month/year for each of these setups. Every watt you save lets you use roughly 0.72 kWh less per month, or 8.76 kWh less per year. Now it is just a question of what you are willing to pay to keep this running. :slight_smile:

Of course, one final note: you may not need to run this 24/7. You could just run the servers when you need them and start them up via IPMI, and that saves a ton of cost. Your setup is old, but it is not obsolete, and won't be for a while yet.

Also, like you, I apologize if the above comes across as unfriendly. I do see your point, and while I have some slight disagreements with it, I agree your setup is pretty sweet apart from the power consumption. I just wanted to share my thoughts regarding that.

Thanks, and I'll remember all of this for my next build at 70. That will be in 6 years. Lol

Has anyone had issues with ipmitool not running or enabling in Proxmox 8.1?
It installed but will not disable the automatic fan setting on my Dell XC730xd-12. iDRAC 8 & 9 Enterprise will not spin any fans down lower than 96%. Lifecycle Controller has them set in the thermals to 23%, and power management is set from minimal to performance. Any suggestions? At 23%, I am able to keep the exhaust temp at 38°C at 4,800 RPM. Now that I have installed the latest version of Proxmox, all the iDRAC settings have gone south!

This R730xd is just a hypervisor web appliance. I could use some advice, as I have tried installing Proxmox versions 6, 7, and now 8.1 (the latest) and get the same issues. I also changed out my iDRAC card, imported the config file, then reset it to factory and changed just the IP, user, and fan states. Same results.
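For reference, the manual fan control I am attempting uses the widely shared (unofficial, generation-dependent, use at your own risk) Dell raw IPMI commands, roughly:

> ipmitool raw 0x30 0x30 0x01 0x00       # disable iDRAC automatic fan control
> ipmitool raw 0x30 0x30 0x02 0xff 0x17  # set all fans to 23% (0x17 hex = 23 decimal)
> ipmitool raw 0x30 0x30 0x01 0x01       # hand control back to iDRAC's automatic curve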
