My first professional NAS

Hello everyone, hope you’re having a great week!

Earlier this week, I was tasked with picking hardware for, building, and setting up a TrueNAS machine at work to replace our file server.
You see, my seniors' original idea was to migrate the file server to Proxmox and set up four 8TB SSDs in the cluster, but it turned out to be ten times slower.
The criteria for the hardware were as follows:

  • Intel platform
  • Intel ethernet chip
  • 128GB of RAM
  • plenty of PCIe slots
  • 8 SATA ports

ECC was also added to the list of requirements after discussing it further and pushing for it. Unbuffered ECC is not that much more expensive, and we've already seen btrfs errors caused by faulty RAM.

The motherboard I ended up going with is the Asus Pro WS W680-ACE, paired with an i5-13500 and a kit of 4 x 32GB DDR5 (KF552C40BBK4-128). From what I gather, ECC should work with this combination.
Oh, and a 1TB Solidigm P44 Pro as a cache drive. The only issue is that the board has just four SATA ports, but that's not so bad: it has a good amount of PCIe slots, so an HBA can cover the rest.
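
Once the parts arrive, it's probably worth verifying that ECC is actually active rather than trusting the spec sheets. A quick check from a Linux live environment (or the TrueNAS Scale shell) looks something like this:

    # Needs root; reports what the firmware claims about the memory array
    dmidecode --type memory | grep -i "error correction"
    # "Multi-bit ECC" means ECC is active; "None" means it is not,
    # even if the DIMMs themselves support it.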

The parts should arrive next week. I'll admit I wasn't able to do more than half a day of research on the hardware, so maybe I missed something. :x

I’m excited to get started but I’m not sure what this post will be… Probably an open build log!
Does anyone have experience with this kind of thing they care to share? I would love to discuss it.

Either way, thanks for reading!

Does anyone have experience with this kind of thing …

If this build is going to host anything with real consequences should it be lost, destroyed, or simply unavailable when wild unscheduled downtime happens, then don't do a whitebox build from consumer parts.

Consumer parts do an excellent job in their designed use case, which is not 24/7 operation. Hardware designed at best for 8h/5d duty and a five-year maximum lifetime is a risky bet, and not that much cheaper.

It's a headache in private life and very risky in professional life. If you go ahead anyway, good luck; it will be an interesting experience.
It's definitely doable, just risky, and vendors will definitely not support you in this use case (if at all).

Regardless of the above, I would strongly recommend pricing out a second, comparison build from lower-end enterprise parts, i.e.:

  • Xeon Silver / EPYC on the lower end
  • ECC DIMMs within JEDEC spec
  • a Supermicro server board
  • essentially, follow the verified compatibility lists

In all likelihood it will not be much more expensive than the original build. Maybe even cheaper if you go through refurbished hardware.

2 Likes

If ZFS is the goal, I'd highly recommend FreeBSD over Linux, as ZFS there performs better and is considered more reliable overall, but that's up to you. Sadly, TrueNAS uses (at least for now) quite dated versions of both operating systems, so support for relatively new hardware is sparse.

Americans seem to love Supermicro; I'm personally not that fond of their hardware overall, and I'd probably just go for a Fujitsu server and call it a day.

But you should seriously consider what @greatnull mentioned; even a support agreement along with hardware from iXsystems might be something to look at.

2 Likes

Yeah, I absolutely understand the impulse to simply build the solution piece by piece. It's what I would have done five years ago as a junior tech, if allowed.

It's an amazing and very educational experience, doing the homelab thing live, but understand this:

  • if you build it, then you support it fully through its entire lifecycle
  • parts are not standardized, so compatibility must be verified by you for every part
  • you must keep spares to mitigate future failures
  • there will be software edge cases, and worse, hardware edge cases, and since you are not an OEM, tough luck solving them
  • if the build is made from consumer hardware, support begins and ends with you

If it's some sort of internal sandbox environment, then go for it. If it's actually used for mission-critical production operations, either buy a Supermicro config or go to a big OEM vendor and buy support.

It's not worth the risk.

2 Likes

Nice position to be in! As you've already ordered the parts I won't comment on that, but what I would suggest is buying or building another machine that receives frequent backups. Your final backup tier could then be Backblaze, which works a treat… once you've tested it and avoided unwanted transactions that cost you money(!). It wouldn't have to be anything special, just a 2-4 core machine of some sort. Just a thought for you. Good luck!

Pants, someone beat me to it :roll_eyes:

1 Like

:+1: :+1: :+1: :+1:

1 Like

Haha, yeah. You’re both right, and I kinda rushed in to be honest. ^^’
Nevertheless I’m ok with possible headaches down the line.
Our backup system works like this: hourly local backups, a daily incremental backup to a dedicated machine, and, as the final tier, a fortnightly tape backup stored in a very secure location outside the offices. And since we need a solution sooner rather than later, I'm okay with some instability for now.
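
On the new TrueNAS box, the hourly tier should map naturally onto ZFS snapshots. TrueNAS has periodic snapshot tasks built in, but the raw equivalent would be roughly this (with tank as a stand-in pool name):

    # /etc/crontab-style entry: recursive snapshot of the pool every hour
    # (% must be escaped inside crontab lines)
    0 * * * * root zfs snapshot -r tank@hourly-$(date +\%Y\%m\%d\%H)

Snapshots are nearly free thanks to copy-on-write, so keeping a day or two of hourlies costs very little.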

I will take this advice to heart though, I’ll try to gather the needed hardware over the following weeks so we can have a second machine for stability’s sake. I’m in Europe so results may vary :B
I’ll probably log it as well.

Do you mean standard FreeBSD? I know TrueNAS also has a version that runs on FreeBSD, but I'm not sure if that's what you mean. In any case, I'll definitely look into it. I'm sure I'll find something by looking it up, but do you have any particular resource you could share? o:

Either way thanks guys, it’s always great hearing from more experienced people!

For the future, if you want to run TrueNAS Core/Scale, you should seriously consider iXsystems, who created and maintain TrueNAS but also offer hardware solutions and support.

45Drives is in Canada. They build server chassis that are meant to be used with TrueNAS but are OS-agnostic. They also provide service contracts and support.

It's generally recommended that you deploy TrueNAS Core, as it is FreeBSD-based and has a proven track record of being very stable and reliable. Scale is still considered a beta operating system.

1 Like

Either standard FreeBSD or TrueNAS Core; keep in mind that TrueNAS targets a rather old FreeBSD release, and I don't know what's been backported in terms of drivers.

As far as FreeBSD goes, most of it is covered in the official Handbook. You can dive much deeper once you're familiar, but it's a good start.
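
Since driver support on that older base release is the main worry, it's worth booting the installer or a live image first and checking that every device actually has a driver attached. A minimal check on FreeBSD-based systems:

    # List PCI devices along with their attached drivers
    pciconf -lv
    # Anything listed as "none0@pci..." has no driver attached -- a quick
    # way to spot hardware the bundled kernel doesn't support yet.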

1 Like

As a sysadmin and someone who has made this mistake, I can't agree more. I thought going this route as a cost-saving measure would be fine for what I'm doing, but it became a big headache after I had issues with my motherboard and HBA.
If you're going to go this route, verify that someone else has tested the hardware combination for your use case, and be prepared to spend time digging through logs to find issues, because it's usually not a matter of "if" but "when".
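
For the log digging, the usual first stops are SMART data and the kernel log; on Linux, something like this (the device name is just an example):

    # Full SMART health report for a suspect disk
    smartctl -a /dev/sda
    # Follow the kernel log for HBA resets, link errors and timeouts
    dmesg -w | grep -iE "reset|error|timeout"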

Let me say I love working in IT, and I'm all for DIY when you're just getting started, using it to learn, and have the time to dedicate to diagnosing issues. But after a 60-hour week of fixing things at work, the last thing you want to do is go home and pull equipment out of a rack, spending the few hours you have trying to fix or upgrade something. The cost savings just aren't worth it when you want something that "just works".

2 Likes

:slight_smile: Well, sometimes if you don’t rush into things, you’ll never get anything done!

That’s great you’ve got tape though.

It is recommended; this might be merely my perspective, but many solutions don't accept that hardware failure happens, whereas TrueNAS does. It's the very reason snapshots and replication exist, along with the convenient backup of the config file (see the sketch after the list below). Because I don't do IT full time, I have:

  • Daily server (24/7, with Backblaze)
  • Backup server (run when many file changes have been made)
  • Another backup server (for cold file storage and active files)
  • Yet another backup server (for cold storage backup only)
  • Then another backup server (for a 3rd copy of the cold storage, in a separate building on the same site)
  • Spare server for testing
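
The replication between boxes like these can be as simple as ZFS snapshots shipped with send/receive. A rough sketch, assuming ZFS on both ends (pool and host names are hypothetical):

    # First run: full replication of a snapshot to the backup box
    zfs snapshot -r tank@rep1
    zfs send -R tank@rep1 | ssh backup-host zfs receive -F backuppool/tank
    # Later runs: send only the increment between the last two snapshots
    zfs snapshot -r tank@rep2
    zfs send -R -i tank@rep1 tank@rep2 | ssh backup-host zfs receive -F backuppool/tank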

Bear in mind I have this redundancy because I don't do this as a day job… so the extra protection is partly to protect the data from me! :laughing: These are also not 'proper servers', but a mixture of retired Xeon workstations, i3-9100s and G4560s.

I didn’t really mean to write this much, sorry!

2 Likes

Yeah, what he said. Buy a new or used Supermicro or IBM with a couple of Xeon CPUs; basically, something that has a real disk controller with a backplane that can fit at least 8 disks and lets you remove disks live, a.k.a. hot plug.

1 Like

Your post is not very clear.

What do you mean by a 'file server'; did you mean the TrueNAS?
Did you virtualize TrueNAS and put it on a Proxmox cluster?
Did you actually have a cluster of 3 physical Proxmox servers, did you use Ceph or ZFS, and how many disks were in each server?
If you had 3 nodes, are you now just going to throw 3 servers away because someone misconfigured them and they were slow, and replace those 3 nodes with one consumer-grade logic board?

We can probably fix your Proxmox cluster. A cluster's performance depends on the network, especially if you ran Ceph, and if you have 1 Gbps NICs, of course it was slow. These are assumptions, because I don't know how you set up the cluster.
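
If you do revisit that cluster, the first measurement I'd take is raw bandwidth between nodes on the link Ceph actually uses, e.g. with iperf3 (the address is an example):

    # On one node, start a listener:
    iperf3 -s
    # From another node, measure throughput to it:
    iperf3 -c 10.0.0.1
    # Ceph replicates every write across nodes, so a 1 Gbps link caps
    # you at roughly 110 MB/s before any disk is even touched.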

Why did you mention btrfs when we're talking about ZFS? Are we talking about btrfs? OK, so I guess the btrfs errors were down to RAM, and that's the reason you went with ECC.

I don't see a reason why you would need a separate SSD for cache, because your disks are all SSDs: there will be no speed increase, and the RAM cache is faster. You would benefit from a SLOG on an SSD if you had HDDs and synchronous writes; synchronous writes happen for databases or for NFS. It's not needed in your case.
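
For reference, if HDDs ever do enter the picture, attaching a SLOG and checking whether sync writes even matter for a dataset looks roughly like this (device and dataset names are examples):

    # Add a dedicated log (SLOG) device to an existing pool
    zpool add tank log /dev/nvme0n1
    # Check the sync-write policy on a dataset
    zfs get sync tank/mydataset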

Here are some more critiques of your buying habits. You seem to be buying consumer-grade equipment for business needs. The SSD you posted here has GAMING all over its website, and it doesn't seem to have Power Loss Protection, so it basically lies about its synchronous write speed.

You would need something like this; if you look at the specs under Features, you will see Power Loss Protection.

PLP is explained here

The Solidigm disk… oh my God, I just downloaded the specs and it's not even a PDF. It's in Excel format; who does that?
Anyway, it doesn't have Power Loss Protection, so don't use it for the ZIL/SLOG. Return it.

Unless you're going to run three of those motherboards in a Proxmox cluster, I would also return the board and buy, as a couple of other guys suggested, business-grade equipment: Supermicro, Dell, IBM.

Other guys below suggested FreeBSD. You can do that, but you can also just install TrueNAS directly on the server; nothing wrong with that if you want TrueNAS. The important thing is how you utilize the 4 disks. Do you make two vdevs of two mirrored disks each, giving you 50% of the total disk space, i.e. 16TB, and two-disk redundancy? Or do you go raidz1, where only one disk's worth of space goes to parity? (See the sketch below.) TrueNAS Core is based on FreeBSD, TrueNAS Scale is based on Linux; each has ZFS, so there's nothing wrong with the ZFS part.
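
For illustration, the two layouts that 4-disk decision boils down to, with FreeBSD-style device names and a hypothetical pool name:

    # Option A: two 2-way mirror vdevs -- 50% usable space,
    # survives one disk failure per mirror, fast resilvers
    zpool create tank mirror da0 da1 mirror da2 da3
    # Option B: a single raidz1 vdev -- ~75% usable space,
    # survives any single disk failure
    zpool create tank raidz1 da0 da1 da2 da3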

I am more interested in the Proxmox side: how was it implemented, and why did it become 10 times slower? And 10 times slower than what?

1 Like

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.