Project - New TrueNAS Build

So I have finally decided I'm going to do a major overhaul on my TrueNAS setup! Long story short: servers are a disease, and if you get offered one, turn away before it's too late.

My existing setup currently runs as an ESXi VM on my old ESXi host (an AMD FX-8150 CPU), with TrueNAS allocated 10 GB of RAM and two LSI 2008 controllers passed through.

The disk configuration consists of three main pools, mostly RAID-Z1s but also a mirror:

  • 3 x 8TB
  • 3 x 2TB
  • 2 x 3TB

Not the most ideal setup, but it dates back to my first build in the FreeNAS 9 days.

As for the new system, I have an HP DL380p Gen8 with 12 LFF drive bays and what I think are decent enough specs for a home NAS:

  • 2x Intel Xeon E5-2630L v2
  • 64 GB RAM
  • HP H220 HBA card (new SAS cables are on their way as we speak)

Some quick notes about the DL380: I have toyed around with using the integrated RAID controller in HBA mode, but based on what I have read and my limited testing I was not confident in it. While disks were presented to TrueNAS fine, it still uses the questionable ciss driver and did not deal well with drives added while the system was online. Also, the fans are not the quietest, which is something to keep in mind if you are looking at one of these servers.

Moving over to the new NAS, I'm looking to consolidate into a single pool with multiple vdevs, with datasets for my different data. With my current set of disks I could do three RAID-Z1s in the pool, but I don't have high hopes for the reliability and data integrity of that, so after doing some reading I am considering the following configuration. So that I can reuse the disks I have, I keep a copy of all data on separate backup disks, ready to be restored to the new pool.

Pool:
vdev A: RAID-Z2, 5 HDDs, 8TB each
vdev B: RAID-Z2, 5 HDDs, mix of 3 and 4TB
SLOG: 2x 16/32GB Intel Optane Memory M10 (mirrored pair)
L2ARC: M.2 NVMe SSD, undecided on any specifics at this time.
Hot spare: 1x 8TB HDD
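
For reference, a rough sketch of how that layout would translate into a zpool create (the device names and the pool name "tank" are placeholders; in practice the TrueNAS UI builds this for you using stable /dev/disk/by-id paths):

    # Sketch only: vdev A, vdev B, mirrored SLOG, L2ARC, and a hot spare
    zpool create -o ashift=12 tank \
        raidz2 sda sdb sdc sdd sde \
        raidz2 sdf sdg sdh sdi sdj \
        log mirror nvme0n1 nvme1n1 \
        cache nvme2n1 \
        spare sdk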

I have some further questions about how I could optimise my setup here with the use of a metadata (special) vdev, different compression, record size, and ashift values, but I wanted to get other people's thoughts to ensure I am on the right track.
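
For context, these are the kinds of knobs I mean; a minimal sketch with hypothetical dataset names (ashift is fixed per vdev at creation time, while compression and record size are per-dataset):

    # ashift can only be set when a vdev is created (ashift=12 for 4K-sector drives)
    # Per-dataset tuning, e.g. smaller records for VM images, large for media files:
    zfs set compression=lz4 tank/vms
    zfs set recordsize=16K  tank/vms
    zfs set compression=lz4 tank/media
    zfs set recordsize=1M   tank/media
    # A metadata (special) vdev would be added as its own mirrored pair:
    zpool add tank special mirror nvme3n1 nvme4n1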

Thoughts and feedback on my build plan? Any other ways I can tweak the pool performance?


I like to think of used hardware addiction as a chronic condition that needs to be managed with regular medication from a cheap online store. It is much better than new hardware addiction, the acute disease that needs an Apple a day to keep the Doctor away…much more expensive.

Nice server. Does it have 10Gb networking onboard or via a LOM module card? Before investing in a SLOG / L2ARC (not really needed for a pure data storage server) I would upgrade the networking.

I'm assuming these will go on riser cards. Make sure they are compatible. I had issues with a couple of M.2 risers in a previous FreeNAS build; I can't remember the specific models though.

RAID-Z2 with 5 drives is not the most optimal design; 6 or more drives per vdev would be more typical. I'm sure you are aware that vdev B will only give you capacity in multiples of 3 TiB, as every disk in a vdev is effectively treated as the smallest one. It may be better to remove the 3 TB drives from vdev B and increase vdev A to 6 drives.
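
Rough numbers, ignoring ZFS overhead and the TB/TiB distinction:

    vdev A: RAID-Z2 of 5 x 8 TB          -> (5 - 2) x 8 TB ≈ 24 TB usable
    vdev B: RAID-Z2 of 5 x mixed 3/4 TB  -> sized to the smallest disk,
                                            (5 - 2) x 3 TB ≈ 9 TB usable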

This all depends on your workload. If you are using VMs or containers then it is worth investigating. If it is just bulk storage then I'd leave them at the defaults.
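
The current values are easy to check before deciding whether to change anything (pool name hypothetical):

    zfs get compression,recordsize,atime tank
    zpool get ashift tank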

Good luck and enjoy the build.


It came with a dual-port SFP+ 10Gb LOM card, and I've thrown in a quad-port gigabit card for 1G networking as well. My plan is to use a DAC cable between it and my ESXi host, which will allow plenty of bandwidth for VMs and backups.

Will keep that in mind, thanks.

I'll have a think about that. I assume the recommendation for 6 drives is for the best capacity utilisation of the drives in the vdev? Still leaning toward 5 drives for now, as I'd like a few bays free to allow me to swap in backup drives for ZFS replications.

OpenZFS 2.1 is touting the ability to add disks to a RAID-Z vdev, which could allow me to expand to 6 drives in future.
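
From what I've read, once RAID-Z expansion is actually available in the installed OpenZFS version, growing a vdev should be a single attach against it, something like (names hypothetical):

    # Attach a sixth disk to the existing 5-wide raidz2 vdev
    zpool attach tank raidz2-0 sdl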

I understand that all my drives in vdev B will effectively be 3 TiB; that's fine by me, I'm just trying to use what I have at the moment. I'm nowhere near capacity at this stage, but I'm aiming to set up my pool and vdevs for decent performance, data integrity, and some amount of expandability going forward.

I would like the possibility of running VMs off a datastore if the performance is adequate. I can get to this part of the build once I've got the hardware sorted and the pool set up.
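
Roughly, that would mean either an NFS share of a dataset or a zvol exported over iSCSI to ESXi; a minimal sketch with hypothetical names:

    # Sparse zvol to present to ESXi as an iSCSI datastore
    # (the iSCSI target/extent itself is configured in the TrueNAS UI)
    zfs create -p -s -V 500G -o volblocksize=16K tank/vm/esxi-datastore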

Will try and keep the thread up to date as things progress; hopefully my experience may help someone else with their build.

Doing some further research on this, it's important to note that this server does not support PCIe bifurcation. So any M.2 storage added would need to sit on an NVMe carrier card that handles the lane splitting itself (i.e. one with its own PCIe switch), or on a PCIe-to-SATA controller like a JMB582- or JMB585-based card.

I've picked up a JMB582-based card (an IOCrest one) to use for OS drives in a mirrored pair.

As for disk storage, I've settled on the HP H220 Host Bus Adapter, which is just a rebranded LSI 9205-8i.

With this card, or probably any other PCIe-based solution, you'll likely find the SAS cables to the backplane do not reach and are just a tad too short. I found some G7 SAS cables (part numbers 498426-001, 493228-006, 4NOH6-01) which are just over 800 mm; the ideal length would be around 600 mm, but I could not find those for a reasonable price. These cables fit fine with some of the excess cable tucked between the backplane and the fan assembly.

Got TrueNAS up and running with a pool initialised. LSI controller works a treat!

Unfortunately the server will not boot from the IOCrest card :frowning:. Disks are detected in the OS, and the controller can be seen as a SATA controller in the BIOS and boot selection; however, I am unable to boot from devices attached to it.

I also tested an NVMe drive in one of the PCIe slots, but alas I am unable to boot from that either.

My current setup involves a 128 GB SSD connected to the onboard SATA connector on the main board; the second 128 GB SSD is on the IOCrest card, and the two drives form a mirrored pool. But if the primary were to fail, I would be unable to boot.
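
The mirror itself is at least easy to keep an eye on; it's only the "can the surviving half actually boot" part that remains a hardware question:

    # The boot pool is called "boot-pool" on current TrueNAS (freenas-boot on older installs)
    zpool status boot-pool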

If anyone has experience with PCIe SATA controller cards in servers, or more specifically with my hardware, I would be keen to know whether anyone else has been able to boot from an NVMe drive or from a SATA device connected to a PCIe controller.

I suspect you need to upgrade the main board to something Skylake era or newer to get NVMe booting. For AMD that probably means first-gen Ryzen.

Not sure about the SATA controller. I’ve got one that boots fine in a 2014 motherboard but won’t run at all in a 2016 board. I suspect your current SATA boot drive is the way to go.
