Hive's NAS - 192TB - poor man's Storinator

Introduction

Welcome to a short build description of my NAS. Three or so months ago I decided to move my hard drives from my workstation to a dedicated machine. First I bought some hard drives, then everything I thought I would need for the rest of the machine. Then I bought more hard drives, realized that I wanted to run a Kubernetes cluster on the machine as well, redid every part of the plan, bought some larger SSDs, and then picked up some additional hard drives. The processor has an option to limit the TDP from 65 watts to 45 watts, and the motherboard offers a further option to limit the TDP to only 35 watts, which is the option I chose for this setup. In the end I am quite pleased with how it turned out.

Parts

| # | Part | Notes |
|---|------|-------|
| 12 | Seagate Exos X16 16TB SATA | - |
| 6 | Toshiba Enterprise Capacity MG08ACA 16TB | - |
| 1 | LSI/Broadcom SAS 9300-16i | HBA in IT mode |
| 1 | AMD Ryzen 7 PRO 4750G | TDP limited to 35 watts |
| 1 | Gigabyte X570 Aorus Pro | - |
| 4 | Kingston Server Premier DIMM 32GB DDR4-3200, CL22-22-22, ECC | - |
| 2 | Gigabyte GC-M2-U2 | Mini-SAS add-in card |
| 1 | be quiet! Pure Rock Slim 2 | - |
| 1 | Inter-Tech IPC Server 4F28 | 19" rack-mountable case |
| 1 | Samsung PM9A3 U.2 1.92TB | for L2ARC |
| 2 | Samsung PM983 U.2 7.68TB | - |
| 1 | Intel X550-T2 | dual 10Gbit NIC |
| 3 | Arctic P12 Max | front of the case |
| 2 | Arctic P8 PWM PST CO | rear of the case |
| 1 | Noctua 80mm PWM | - |
| 3 | 120mm metal mesh filter | - |
| 1 | Corsair HX750 PSU | - |

Pictures


Software

The machine runs Rocky Linux 9.2 with a dedicated virtual machine for TrueNAS Core and a couple of additional virtual machines that make up a Kubernetes cluster. I limited the TrueNAS virtual machine to 32GB of memory, but gave it a rather large L2ARC of almost 2TB on the Samsung PM9A3 to cache data as needed. Apart from the L2ARC, the ZFS pool consists of two eight-drive raidz2 vdevs. All data drives are connected via the HBA. I added two hot spares, which are connected via the onboard SATA controller. The pool also features a 100GB mirrored SLOG and a 2TB mirrored special vdev for metadata, both placed on the Samsung PM983s.
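For reference, a pool with this layout could be assembled roughly like this from the command line (a minimal sketch; the device names are placeholders, and in practice TrueNAS builds the pool through its own UI):

```
# Two eight-drive raidz2 data vdevs (sda..sdp stand in for the 16 HBA-attached drives)
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp

# Two hot spares on the onboard SATA controller
zpool add tank spare sdq sdr

# Mirrored SLOG and mirrored special (metadata) vdev on the two PM983s
zpool add tank log mirror nvme1n1p1 nvme2n1p1
zpool add tank special mirror nvme1n1p2 nvme2n1p2

# L2ARC on the PM9A3 (cache vdevs carry no redundancy and need none)
zpool add tank cache nvme0n1
```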
18 Likes

Nice! But I don’t see a lot of room for (future) expansion. First, the case doesn’t support it; then there’s the limit on PCIe lanes from the CPU (an APU in your case) and mainboard, so a disk-shelf setup is out.

1 Like

To be honest I feel like I already went a bit overboard. The original plan was to go with half the capacity, but one thing led to another and here we are. I also have to say it was not easy to even find a case with as much space for 3.5" drives as this one has. Such cases exist, sure, but they cost a significantly larger amount of money. I would have liked to go with one of those ASRock Rack motherboards that feature IPMI, but since I already had this consumer board from Gigabyte lying around, I chose to just stick with it.

I figure I could have gone more in the enterprise direction and maybe used an older Epyc, but then again I tried to use what I already had and not let this get too expensive. This machine does not pay for itself …yet!

2 Likes

Did you sell a kidney or something? :wink:

2 Likes

If you think I sold a kidney for that, I’d rather not mention that I already have eighteen additional drives to build a full offsite backup in the future …

4 Likes

Nice, how noisy is it and what’s the power consumption?

1 Like

Where did you get your hard drives from? Price per drive?

The fans are set to 40% PWM and they are noticeable but not too loud. The hard drives are mostly silent when written to or read from sequentially; under a lot of random access, however, they are quite loud. Unfortunately I have nothing to actually measure the loudness for you, but I hope my description still helps.
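If anyone wants to replicate the 40% setting from within Linux rather than through the BIOS fan curves, it looks roughly like this on a hwmon-exposed fan header (a sketch only; the hwmon number and pwm channel are assumptions and differ per board):

```
# Switch the header to manual control, then set ~40% duty (the scale is 0-255)
echo 1 > /sys/class/hwmon/hwmon2/pwm1_enable
echo 102 > /sys/class/hwmon/hwmon2/pwm1
```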

The drives in the NAS were new from various local online stores, at about $235 per drive, sometimes more, sometimes a little less. I did not order them all at once, but over a long period of time. It was a little different with the additional eighteen drives I bought for backup: those were way cheaper at about $160 each, but I bought them used. For those used drives I ran a long SMART test and at least one pass with badblocks to make sure they were in working condition.
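In case it helps anyone buying used drives, the check amounted to roughly the following per drive (a sketch; /dev/sdX is a placeholder, and the badblocks write pass destroys all data on the drive):

```
# Kick off the long SMART self-test, then review the results once it finishes
smartctl -t long /dev/sdX
smartctl -a /dev/sdX

# One destructive write-and-verify pass over the whole drive
# (-b 4096 avoids the 32-bit block-count overflow on drives this large)
badblocks -b 4096 -wsv /dev/sdX
```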

4 Likes

You can get a potentially fake and probably Chinesium decibel meter from Bezos for around $20. They’re fun to play with for an hour or so before they get thrown in a drawer until the batteries die. :yay:

Thank you!

1 Like

You could always add a disk shelf later if you wanted to go nuts. Nice build.

3 Likes

That feeling when 192TB is not “nuts” for a single home-labber… and Moar Storage is considered.

I don’t think I have 192TB across all my home machines… let alone just my storage box…

Nice build @H-i-v-e

Is this the one to go in the co-lo? In that case, I don’t think you need to worry about noise.

I guess you have done a burn-in test with the lid closed, and the drives stay at a reasonable temperature?
It does not look like a ton of ventilation out the back, but there is some …

And you are downvolting the CPU, so the system should run more efficiently?

2 Likes

Thank you!

Nope, that’s the one that stays with me.

It stays in a reasonable temperature range. Obviously the CPU does not get that hot at only 35 watts. The Arctic fans are built for static pressure, so they do their job just fine; the only thing I needed to change was to add the Noctua fan, since the NIC and the HBA tend to get toasty, as they are built for server-like airflow configurations and fitted only with passive heatsinks.

The three fans in the front push so much air that it gets forced out of the ventilation slits at the back of the case. The two fans in the back pull a bit, but I think it would work even without them.

Yes, I mentioned it in the first post. Those APUs have an option to limit them from 65 watts down to 45 watts. When I wanted to apply this setting I saw that my motherboard offered an additional option to limit the processor to only 35 watts, which is what I did.

1 Like

What’s the system idle power consumption like without spinning down the drives?

I’m hoping to make a <200 watt (idle, drives spun up), 30-drive NAS in the next couple of months once Supermicro releases their AM5 motherboard, and am interested in a point of comparison for what to expect.

1 Like

I honestly don’t know, but I will check for you. I don’t know if I will manage to do that today, but I will let you know!

1 Like

thnx!

also +1 on the Toshiba MG08s, I’m running a ton of them and they have to be some of the most reliable drives I’ve used, up there with OG HGST.

FYI: my EPYC system draws about 140W while syncing a RAID6. It’s a 7551P CPU, Supermicro H11SSL mainboard, 4x 32GB Samsung RDIMM RAM, 4 HDDs, 5 NVMe drives, and a dual 10Gb SFP+ card (this card is not in use ATM, I’m working on switching my home network to fibre). I noticed that just the BMC sips about 10W when the system is off.

1 Like

That’s actually better than I would have expected.

I’ve noticed the ASPEED IPMI chips are power hogs too, though the most current chip they make, the AST2600, has dramatically improved power consumption, based on the thermal camera views I’ve seen of motherboards with AST2400, AST2500, and AST2600 chips on them; the AST2600s are consistently >25 degrees cooler than the previous-generation chips with similar airflow across them. They made the jump from 40nm lithography to 28nm with the AST2600, which is the source of the improvement.

Is this reply just about CPU temperature? I am also curious about the disk temps. It is a tight package airflow-wise.

1 Like

Nah, just the disks, and all good.
CPUs can thermal throttle and stuff; disks just die quicker, if I’m not mistaken.

But OP was talking previously about storing a machine in a datacentre, where the noise of the fans need not be an issue; that’s not the case here, though.