Welcome to a short build description of my NAS. Three or so months ago I decided to move my hard drives from my workstation to a dedicated machine. First I bought some hard drives, then everything I thought I was going to need for the rest of the machine, then more hard drives; then I realized I wanted to run a Kubernetes cluster on the machine as well, redid every part of the plan, bought some larger SSDs, and then some additional hard drives. The processor has an option to limit the TDP from 65 watts to 45 watts, and the motherboard offers a further option to limit it to only 35 watts, which is what I chose for this setup. In the end I am quite pleased with how it turned out.
The machine runs Rocky Linux 9.2 with a dedicated virtual machine for TrueNAS Core and a couple of additional virtual machines which make up a Kubernetes cluster. I limited the TrueNAS virtual machine to 32GB of memory, but gave it a rather large L2ARC of almost 2TB on the Samsung PM9A3 to cache data as needed. Apart from the L2ARC, the ZFS pool consists of two vdevs of eight drives each in raidz2. All data drives are connected via the HBA; the two hotspares I added are connected via the onboard SATA controller. The pool also has a 100GB mirrored SLOG and a 2TB mirrored metadata special device, both placed on the Samsung PM983s.
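For anyone who wants to picture the layout, here is a rough sketch of the pool topology written as a zpool create command. The pool and device names are placeholders (and TrueNAS builds the pool through its UI anyway), so treat it as an illustration rather than my exact setup:

```
# Placeholder names throughout: two 8-wide raidz2 data vdevs on the HBA,
# SLOG and metadata special vdev mirrored on the PM983s,
# L2ARC on the PM9A3, hotspares on the onboard SATA ports.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
  log     mirror nvme0n1p1 nvme1n1p1 \
  special mirror nvme0n1p2 nvme1n1p2 \
  cache   nvme2n1 \
  spare   sdq sdr
```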
Nice! But I don’t see a lot of room for (future) expansion. First, the case doesn’t support it, then there’s the limit on PCIe lanes from the CPU (APU in your case) and mainboard so a disk-shelf setup is out.
To be honest I feel like I already went a bit overboard. The original plan was to go with half the capacity, but one thing led to another and here we are. I also have to say it was not easy to even find a case with as much space for 3.5" drives as this one has. There are such cases, sure, but they cost significantly more money. I would have liked to go with one of those ASRock Rack motherboards which feature IPMI, but since I had this consumer board from Gigabyte already lying around I chose to just stick with it.
I figure I could have gone more in the enterprise direction and maybe used an older Epyc, but then again I tried to use what I already had and not let this get too expensive. This machine does not pay for itself …yet!
If you think I sold a kidney for that, I'd rather not mention that I already have eighteen additional drives to build a full offsite backup in the future …
The fans are set at 40% PWM and they are noticeable but not too loud. The hard drives are mostly silent when written to or read from sequentially. Under a lot of random access, however, they are quite loud. Unfortunately I have nothing to actually measure the loudness for you. I hope my description still helps.
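For reference, a fixed duty cycle like that can also be pinned from Linux through the hwmon sysfs interface instead of a BIOS fan curve. The hwmon index and pwm channel below are made-up examples and differ from board to board:

```
# Hypothetical paths; find the right hwmon/pwm channel with `sensors` first.
echo 1   > /sys/class/hwmon/hwmon2/pwm1_enable   # 1 = manual PWM control
echo 102 > /sys/class/hwmon/hwmon2/pwm1          # 102 of 255 ≈ 40% duty cycle
```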
The drives in the NAS were bought new from various local online stores at about $235 per drive, sometimes more, sometimes a little less. I did not order them all at once, but over a long period of time. It was a little different with the additional eighteen drives I bought for the backup: those were way cheaper at about $160 each, but I bought them used. For those used drives I run a long SMART test and at least one pass with badblocks to make sure they are in working condition.
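To give an idea of what that check looks like in practice, something along these lines (the device name is a placeholder, and the badblocks write test destroys all data on the drive, so only run it on an empty disk):

```
# /dev/sdX is a placeholder for the drive under test.
smartctl -t long /dev/sdX                    # kick off the long SMART self-test
smartctl -a /dev/sdX                         # later: check the test result and SMART attributes
badblocks -b 4096 -wsv -t random /dev/sdX    # one destructive write+verify pass, WIPES the drive
```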
You can get a potentially fake and probably Chinesium decibel meter from Bezos for around $20. They're fun to play with for an hour or so before they get thrown in a drawer until the batteries die.
Is this the one to go in the co-lo? In which case, I don't think you need to worry about noise.
I guess you have done a burn-in test with the lid closed, and the drives stay at a reasonable temperature?
does not look like a ton of ventilation out the back, but there is some …
And you are downvolting the CPU, so the system should run more efficiently?
It stays in a reasonable temperature range. Obviously the CPU does not get that hot at only 35 watts. The Arctic fans are built for static pressure, so they do their job just fine. The only thing I needed to add was the Noctua fan, since the NIC and the HBA tend to get toasty: they are built for server-like airflow configurations and fitted only with passive heatsinks.
The three fans in the front push so much air that it gets forced out of the ventilation slits at the back of the case. The two fans in the back pull a bit, but I think it would work even without them.
Yes, I mentioned it in the first post. Those APUs have an option to limit the TDP from 65 watts down to 45 watts. When I went to apply that setting I saw that my motherboard offered an additional option to limit the processor to only 35 watts, which is what I did.
What’s the system idle power consumption like without spinning down the drives?
I'm hoping to build a <200 watt (idle, drives spun up), 30-drive NAS in the next couple of months once Supermicro releases their AM5 motherboard, and I am interested in a point of comparison for what to expect.
FYI: my EPYC system draws about 140W while syncing a RAID6. It's a 7551P CPU, Supermicro H11SSL mainboard, 4x 32GB Samsung RDIMM RAM, 4 HDDs, 5 NVMe drives and a dual 10Gb SFP+ card (the card is not in use ATM; I'm working on switching my home network to fibre). I noticed that just the BMC sips about 10W when the system is off.
That’s actually better than I would have expected.
I've noticed the ASPEED IPMI chips are power hogs too. The most current chip they make, the AST2600, has dramatically improved power consumption, based on the thermal camera views I've seen of motherboards with AST2400, AST2500 and AST2600 chips on them; the AST2600s are consistently >25 degrees cooler than the previous-generation chips with similar airflow across them. They made the jump from 40nm lithography to 28nm with the AST2600, which is the source of the improvement.