Sanity Check My Server for TrueNAS

Hey all,

I’m planning to upgrade my design & animation studio’s storage system from 3 low-spec Synology units to a single beefy TrueNAS server (CORE or SCALE, undecided atm). My plan is to then attach a JBOD to it for snapshots, backup, and archival. I already back up offsite and also keep a cloud backup on AWS.

I’m planning on the following spec and would like a sanity check:

  • Gigabyte R272-Z31
  • AMD EPYC 7443P - 24 Cores 2.85GHz/4GHz
  • 16x64GB 3200MHz DDR4 ECC Registered
  • Dual 10GbE SFP+ & built-in IPMI
  • Onboard 12Gb/s SAS expander
  • HBA card for the JBOD
  • Redundant Power Supplies

For storage I’m thinking of getting 24 x 2.5" SSDs, aiming for a usable capacity of 100TB. The case supports 26 drives total:

  • 4 vdevs of 6 drives each, RAIDZ2
  • 2 additional 2.5" SSDs for boot drives
  • 2 M.2 slots kept free in case an L2ARC is needed later, though with that much RAM it may not be.
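As a quick sanity check on whether that layout can actually hit the 100TB usable target, here’s the back-of-envelope math (the 7.68TB drive size is my assumption, since a model isn’t specified; real ZFS usable space is also reduced by metadata, slop space, and the recommended fill ceiling):

```python
# Rough usable-capacity estimate for 4 x 6-wide RAIDZ2 vdevs.
# drive_tb is an ASSUMED size (7.68 TB is a common enterprise SSD capacity).
drive_tb = 7.68
vdevs = 4
width = 6
parity = 2  # RAIDZ2 loses 2 drives per vdev to parity

raw = vdevs * width * drive_tb                  # total raw capacity
data = vdevs * (width - parity) * drive_tb      # capacity after parity
practical = data * 0.8                          # keep pools below ~80% full

print(f"raw: {raw:.1f} TB, after parity: {data:.1f} TB, practical: {practical:.1f} TB")
```

With 7.68TB drives this works out to roughly 184TB raw, ~123TB after parity, and just under 100TB once you respect the usual “keep the pool under ~80% full” guideline, so the target looks plausible at that drive size.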

Use case: this will be our production server, with around 30 people reading from and writing to it over SMB on a 10GbE LAN.

First, welcome! :hugs:

Next: Linus (that LTT guy) did a video last year on JBOD server storage. You may want to watch it as this is an enterprise setup.

Then: given this is essentially a work platform, you may want to consider NVMe drives over SATA ones. Kioxia has suitable drives at 16TB and above; Nimbus Data offers even larger capacities, albeit SATA. (Linus has a video on their 100TB ExaDrive as well.)


Ha, I actually watched all of his server videos recently, which is what gave me the idea of ditching Synology and going TrueNAS. I refuse to pay the inflated prices Synology charges for low-tier components just for that ‘polished’ DSM :slight_smile:

I’m looking at an all-NVMe 24-bay setup atm, using the next model up of the Gigabyte chassis, but I’m pretty sure I’ll cap out the network. I guess that opens up the possibility of a 25GbE network upgrade in the future to fully utilise those drives.
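The network cap is easy to see with rough numbers (the ~3 GB/s per-drive sequential read figure is an assumed ballpark for PCIe 3.0 NVMe; real mixed SMB workloads will be far lower):

```python
# Back-of-envelope: aggregate NVMe bandwidth vs. network link capacity.
# per-drive ~3 GB/s sequential read is an ASSUMED ballpark figure.
drives = 24
per_drive_gbps = 3 * 8           # ~3 GB/s per drive -> 24 Gbit/s

aggregate_gbps = drives * per_drive_gbps   # theoretical pool read bandwidth
link_10gbe = 2 * 10                        # dual 10GbE, bonded
link_25gbe = 2 * 25                        # dual 25GbE upgrade path

print(f"drives: {aggregate_gbps} Gbit/s, 2x10GbE: {link_10gbe}, 2x25GbE: {link_25gbe}")
```

Even dual 25GbE is a small fraction of what 24 NVMe drives could theoretically serve, so the network stays the bottleneck either way; the upgrade just raises the ceiling for the 30 concurrent clients.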

Note the R272-Z31 is specified as having two 1Gbps ethernet ports on board. What add-in card will be providing the SFP+ ports?

So far you’ve described a storage server, but nothing that would use all those cores. There may be an opportunity to save a little ongoing energy usage by choosing a lower-model EPYC here.


I’m going to grab an Intel card with dual SFP+ ports.

I’m going to have around 30 users hitting that server, and it will also be accessing 1 or 2 JBODs, so I want the CPU headroom to avoid bottlenecks.