Old Gaming Rig into TrueNAS Scale Server. Hardware?

Hey Guys!

Last year I upgraded my gaming rig to a new system built from scratch, which leaves me with my old, still perfectly functioning system.

I would like to turn it into a TrueNAS Scale server with about 30 to 50 TB of storage, running Plex in a VM with GPU passthrough, another VM with a second GPU (GTX 970) passed through (to act as a media player, because my TV is too old for updates), as well as 10 or so Docker containers.

My old PC specs are:

  • Intel Core i7-4790k
  • MSI Z97 Gaming 9 AC
  • 16 GB Corsair Vengeance DDR3-1600
  • GTX 1070
  • Samsung 840 EVO 500 GB SSD
  • WD Black 2TB HDD
  • WD Black 4TB HDD
  • 2x LG Blu-ray drives, both flashed for ripping.

I will have to buy a new PSU (I hate the old non modular one), a new case and a new CPU Cooler. Those are all quality of life upgrades that are very well worth it to me.

Here’s the problem / uncertainty:

1.) I have read that TrueNAS Scale likes, or rather needs, ECC RAM, which my old MB doesn’t support. Apparently some boards might accept unbuffered ECC RAM, but I read that even that’s uncertain.

2.) Also, my old mainboard only supports 32 GB of RAM total, which is quite low for TrueNAS. Can I add more, or does that simply not work?

3.) Those old HDDs are probably unusable in a system like that, right? Or is there an application for them? I am thinking of getting either 6x8 TB @200€ or 4x16 TB @320€ anyway, but I don’t want the old ones to end up as door-stoppers.

Do you guys think a config like that could work? What can I do to make it work? Or should I abandon ship and only take the GPU and SSD?

Thank you in advance for your advice!

Will

Proxmox doesn’t require ECC, so consider switching to that over TrueNAS Scale. Both are Linux-based anyway.

As for drives: if you foresee capacity issues in the future, consider using a disk shelf. The compute stays in the NAS; connectivity between the two runs over an external HBA (host bus adapter), which has its ports on the back of the card instead of the internal connectors you would use when the drive cage is in the same case. I happen to have a few NetApp DS2246 units, but those are for 2.5" drives only. Which is conveniently standard SSD size :wink: You can get those DS2246s fairly cheap, but the drive trays usually come separately, and those add up when you need 24 per unit. Word of warning though: these are fairly loud on startup and still not quite quiet after booting, and they also lead down the rabbit hole that is high-availability storage, Ceph clusters and more expensive wanna-haves :money_with_wings:

HTH!

This has been debated extensively on the iX forums.

My personal experience? I have been running 2 TrueNAS Core systems (originally FreeNAS 9) since about 2016, and I have never had a problem using regular non-ECC RAM.

I will point you here: Tom Lawrence’s video on ZFS memory requirements

ECC memory has traditionally been more expensive. I would rather spend that money on more memory than reap the potential benefits of ECC.


If you’re going to use non-ECC memory for a home server, at least take it off XMP and/or remove any manual overclocks from the memory. The m.o. changes from raw performance to stability when you go from gaming rig to server duty. Likewise, you may want to revisit whatever other tuning you have done and wind that back to stock and/or power-saving modes. :slightly_smiling_face:

Why is 32GB low? Alternately, what workload are you intending to run that is going to require a ton of memory?

You could, hypothetically, use them in the short term to set the pool up initially if you are going to use mirrors instead of a RaidZ# option. I don’t know how critical you consider the data you’ll be putting on them, but in either event the non-stripe options for the mixed drives would leave you with 2TB of space until the other drive is upgraded to 4TB or larger.

The caution I might give is in power usage though. If your power prices are high, it’d be worth running the numbers to see how long it’d take you to make up the initial price difference between the 4x16TB vs the 6x8TB, as keeping the extra 2 spindles running may or may not be consequential for you.
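To make "running the numbers" concrete, here is a rough sketch of that break-even calculation. All inputs are assumptions for illustration: the quoted prices are taken as per-drive, idle draw is guessed at ~5W per 3.5" spindle, the drives spin 24/7, and electricity is €0.30/kWh.

```python
# Break-even sketch: 6x8TB vs 4x16TB. Assumed figures only:
# per-drive prices from the post, ~5W idle per spindle, €0.30/kWh.

HOURS_PER_YEAR = 24 * 365  # 8760

def yearly_cost_eur(extra_watts: float, eur_per_kwh: float) -> float:
    """Annual cost of a constant extra load, in euros."""
    return extra_watts * HOURS_PER_YEAR / 1000 * eur_per_kwh

price_6x8 = 6 * 200           # €1200
price_4x16 = 4 * 320          # €1280
upfront_gap = price_4x16 - price_6x8   # 4x16TB costs €80 more up front

extra_power = 2 * 5           # two extra spindles at ~5W each
running_gap = yearly_cost_eur(extra_power, 0.30)

print(f"4x16TB costs €{upfront_gap} more up front")
print(f"6x8TB costs €{running_gap:.2f}/year more to run")
print(f"break-even after {upfront_gap / running_gap:.1f} years")
```

Under those assumptions the bigger drives pay for themselves in about three years; plug in your own power price and measured idle draw to get a real answer.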


That is something I’m gonna look into, thanks. I could keep the NAS portion on my Synology and have all the VMs and Containers on that proxmox machine.

Yeah, lets leave that for future me, when my bank account shows a less dreary amount :wink:

Thank you for that!

I’m not running anything other than XMP at the moment, and I’ll turn that off. :slight_smile:

I saw a few videos and screenshots where servers had 64 GB with more than half of it already used up, so I assumed 32 is too low? The workload is going to be some containers for, let’s call it “media management”, and 2 VMs.

Not really a factor for me, I pay about €1.30 per watt per year.

One big problem with that core is that it draws around 100W at idle.

Here is what an AM4 system with an idle draw of ~50W would cost, with a stronger CPU but equivalent in most other metrics except for the weaker video card:

PCPartPicker Part List

| Type | Item | Price |
|---|---|---|
| CPU | AMD Ryzen 5 5500 | $98.99 |
| Motherboard | MSI A520M PRO | $86.00 |
| Memory | TEAMGROUP Elite 2x16 GB DDR4-3200 CL22 | $52.99 |
| Video Card | Sparkle ECO Arc A310 4 GB | $99.99 |
| **Total** | | **$337.97** |

Now assuming a 50W difference on this, you would save up to 438 kWh per year. If you, like many other people, pay $0.15 per kWh this equates to savings of $65 a year or a five year payoff time for the hardware.
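The payoff estimate above can be sketched as a short calculation, using the same assumed inputs: a 50W idle difference running 24/7, $0.15/kWh, and the $337.97 part-list total.

```python
# Payback sketch for the figures in the post: 50W saved around the
# clock, $0.15/kWh, $337.97 hardware cost. Example inputs, not measurements.

HOURS_PER_YEAR = 24 * 365  # 8760

def payback_years(watts_saved: float, price_per_kwh: float,
                  hardware_cost: float) -> float:
    kwh_per_year = watts_saved * HOURS_PER_YEAR / 1000
    savings_per_year = kwh_per_year * price_per_kwh
    return hardware_cost / savings_per_year

kwh = 50 * HOURS_PER_YEAR / 1000
print(f"{kwh:.0f} kWh/year saved")                      # 438 kWh
print(f"{payback_years(50, 0.15, 337.97):.1f} years")   # ~5.1 years
```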

This was just a quick exercise of what a more modern system can do for you, meant to add to your decision process. It is time to put ye olde faithfuls to rest IMO, but the decision is ultimately yours to make :slightly_smiling_face:

The TDP is 88W for this CPU, so wouldn’t average draw be closer to 50W? (With the other components mentioned, I agree the idle power consumption would be fairly high.)

To the OP:
Consider whether you really need VMs at all. Also, does Plex not have a web interface? If you pick Jellyfin you get a web client, which should work for most devices attached to a TV (i.e. no media player VM needed). For more flexibility in the underlying filesystem (e.g. are you married to the idea of ZFS?) you could try OpenMediaVault.

If we consider only the system and one SSD with the 1070 drawing ~7W-10W idle, we’re talking ~75W idle figures for the 4790k.

The 5500 system outlined above should draw around 35W-40W from the wall socket. Not as good as the Asustor Flashstor 6 bay, which is 24W fully loaded with 6 drives, but still really good.


The default behavior of ZFS on Linux has been to allow ZFS to use up to half of the available memory as a cache to speed up accessing your most used data. This is tunable, either larger or smaller depending on how much RAM you need to use elsewhere for the host, containers, or VMs.

This thread talks about it a little bit:

But the latest release of TrueNAS Scale, 24.04 / Dragonfish, includes the updates where the ARC should now be able to use as much RAM as it wants without causing an out of memory issue.

This isn’t “ZFS is RAM hungry” so much as “ZFS sees RAM is available and is going to use it to go faster, unless something else needs it.” It’ll do that if you have 8GB of RAM, or a TB of RAM. :slightly_smiling_face:
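To illustrate the half-of-RAM default and what a tuned cap might look like, here is a small sketch. The 32 GiB box and the 20 GiB reservation are made-up example numbers; on Linux the actual cap is the OpenZFS `zfs_arc_max` module parameter (in bytes), which requires root to change.

```python
# Sketch: the OpenZFS-on-Linux default ARC cap (half of RAM) vs a
# custom cap that reserves memory for VMs/containers. Example sizes only.

def arc_caps(total_ram_gib: float, reserve_gib: float) -> dict:
    GIB = 1024 ** 3
    total = int(total_ram_gib * GIB)
    return {
        # OpenZFS default on Linux: ARC may grow to half of physical RAM
        "default_arc_max": total // 2,
        # Custom cap: everything except what you reserve for other workloads
        "custom_arc_max": int(total - reserve_gib * GIB),
    }

caps = arc_caps(32, reserve_gib=20)  # hypothetical 32 GiB box, 20 GiB held back
print(caps["default_arc_max"] // 1024**3, "GiB default cap")  # 16 GiB
print(caps["custom_arc_max"] // 1024**3, "GiB custom cap")    # 12 GiB
```

The point the post makes still holds: the cache shrinks when something else needs the memory, so capping it is an optimization for predictability, not a requirement.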


This is what I personally ended up landing on, rather than trying to contort everything into an all-in-one on TrueNAS. I still have that all-in-one box and try it again every release, though.