NAS Upgrade Suggestions

True, still have a look at the mid-range Asus boards, I think at least a couple have ECC support. A $420 ProArt Creator might be overkill here, but perhaps a $220 TUF Gaming B650M-Plus WiFi is reasonable? It lists ECC support as far as I can tell.

On a side note, there is no reason left not to support ECC across the board on mid- to high-end B, X and Z tier boards now, nor any reason not to pay the extra $15 for the memory - just turn ECC off if it is not important to you. As long as ECC support carries a $200-$300 premium, few will bother with it. After all, the chance of data corruption is usually very slim as it is.

If Intel isn’t careful, a next-gen RPi platform with soldered ECC and four to eight M.2 slots will completely change the game… :slightly_smiling_face:

I’m going to look more into boards. Whether I go AM5 with X670/B650, or Alder Lake with an ASRock Industrial IMB-X1712, either way I’ll have ECC support, room for plenty of M.2, a good number of SATA ports, and PCIe lanes. That assumes I go ahead and chop a PowerEdge in half for a case (highly likely, for the fun value alone). That just leaves the software choice: do I just suck it up, deal with TrueNAS, and accept having to look a lot of things up? There don’t seem to be many feasible alternatives at this point for a DIY machine.

So that EPYC kit from last week has already arrived via FedEx. Extremely fast turnaround time from China to Europe, and only for 35 USD.

The seller included a very understated invoice (550+35 USD → 48 USD) by default, without explicitly offering or me asking, so I didn’t bother to correct that slight error to FedEx, and to customs by proxy, either.

Contents:

  • EPYC 7302P (second gen, 16C, 155W TDP, 128 MB L3 cache)
  • 8 x 16GB DDR4 2133 MHz ECC RDIMM modules
  • Supermicro H11SSL-i ATX board (PCIe v3 only, but about 300 USD cheaper)

This is the first time ever I didn’t have to pay duty or VAT, which is mighty strange. The FedEx courier certainly didn’t want any cash on delivery, so :slight_smile:

I don’t have the necessary cooler or free time yet, but I suspect I will have some time on Thursday or Friday.

I will then post some basic performance results and power draw under different scenarios.

I am very curious about the results.

3 Likes

So the Noctua finally arrived today and I did some basic functionality testing.

The results were strange indeed, but the testing setup has some oddities as well:

  • I am using an HDPLEX 250 GaN power supply I have free right now; it is powerful enough to feed the kit, it just does not have the third 4-pin power plug for the motherboard. Will retest with a normal ATX PSU later.
  • Did you know the mounting holes for the SP3 socket cooler are asymmetrical?
  • The supplied memory modules are dual rank, so if 8 modules are used, memtest reports a 16-channel memory configuration. WTF? I didn’t know 2R memory modules count as two for the memory channel configuration.
    • In this configuration, memory bandwidth hovers around 10-12 GB/s. That seems way too low, like single-channel-mode low.
  • If I take out half the modules, keeping the remaining 4 x 16GB modules in the Supermicro-suggested slots for an optimal configuration, memtest now reports an octa-channel memory configuration.

So according to indirect reference data and past observations on different platforms, it looks like I am getting slightly more than a single channel’s worth of performance out of this.
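For a rough sanity check (my own back-of-the-envelope math, not anything the tools report): peak DDR4 bandwidth is roughly transfer rate × 8 bytes per channel, so even a single 2133 MT/s channel tops out around 17 GB/s theoretical, which makes 10-12 GB/s look very much like single-channel territory:

```python
# Back-of-the-envelope DDR4 bandwidth estimate (theoretical peaks only;
# real-world copy benchmarks typically land well under these numbers).

def ddr4_peak_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak in GB/s: transfers/s * 8-byte bus width * channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

measured_gbs = 11.0  # memtest reported roughly 10-12 GB/s on this box

for channels in (1, 2, 4, 8):
    peak = ddr4_peak_gbs(2133, channels)
    print(f"{channels}-channel DDR4-2133 peak ≈ {peak:6.1f} GB/s "
          f"(measured {measured_gbs} GB/s is {measured_gbs / peak:.0%} of that)")
```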

I did a quick and dirty Geekbench run with this suspect configuration:

The entire server did not go over 150W from the mains during multi-core testing, which is very nice indeed.

Still, it might be an anomalous measurement, due to the GaN power supply mentioned above.

Things to verify:

  • Do the memtest performance values hold water? → Recheck via some sort of standardized tool from a live Linux ISO <=== doing this right now
  • Does the octa-channel config perform poorly, or is it mistakenly reported as octa-channel instead? Drop memory down to a single DIMM and compare values
  • Get a proper ATX PSU to check whether I am power-starving the platform instead
  • Borrow, beg or steal known-good RDIMM modules, ideally 3200 MHz. Are there any L1ers in Prague willing to help? Will offer beer for the company of your DIMMs :slight_smile:

FYI: Memtest’s memory bandwidth measurement is full of shit when it comes to data throughput. If used, it should only be compared against its own measurements, otherwise headaches shall ensue.

Reference data mentioning expected real-world values instead of theoretical maximums:

DDR4 2133 MHz Dual channel 28 GB/s (extrapolated)
DDR4 2133 MHz Quad channel 55 GB/s (linked source)
DDR4 2133 MHz Octa channel 110 GB/s (extrapolated)
DDR4 3200 MHz Octa channel 175 GB/s (linked source)

Now, the easiest way to test the memory subsystem would be:

And the results are:

[liveuser@localhost-live mbw-2.0]$ ./mbw 10240 

4 DIMMs, EPYC, DDR4-2133 RDIMM

AVG	Method: MEMCPY	Elapsed: 0.70946	MiB: 10240.00000	Copy: 14433.543 MiB/s
AVG	Method: DUMB	Elapsed: 3.33173	MiB: 10240.00000	Copy: 3073.475 MiB/s
AVG	Method: MCBLOCK	Elapsed: 1.32986	MiB: 10240.00000	Copy: 7700.046 MiB/s

8 DIMMs, EPYC, DDR4-2133 RDIMM

AVG	Method: MEMCPY	Elapsed: 0.73030	MiB: 10240.00000	Copy: 14021.623 MiB/s
AVG	Method: DUMB	Elapsed: 3.30027	MiB: 10240.00000	Copy: 3102.780 MiB/s
AVG	Method: MCBLOCK	Elapsed: 1.30741	MiB: 10240.00000	Copy: 7832.288 MiB/s

2 DIMMs, Ryzen 7950X, DDR5-6000

AVG     Method: MEMCPY  Elapsed: 0.04448        MiB: 1024.00000 Copy: 23019.047 MiB/s
AVG     Method: DUMB    Elapsed: 0.13849        MiB: 1024.00000 Copy: 7393.956 MiB/s
AVG     Method: MCBLOCK Elapsed: 0.05571        MiB: 1024.00000 Copy: 18381.924 MiB/s
1 Like

Progress update: the Dell PowerEdge R510 arrived today; it has a Xeon [email protected], 16GB RAM, 2x 750W PSUs, 1 PERC H700, and 1 PERC H80.

Makes me sort of question dissecting it, but oh well. Parts will go back into the eBay-verse.
I’ve taken all the measurements needed to fit an ATX board, so I’m ready to start with the band saw, but I will be out of the country for a month or so before I get the chance.

Since many of you expressed interest, I will be making a new thread for the construction and setup of the NAS, maybe as a blog like some other folks do, since I seem to have a continuing series of projects that others could find interesting or useful.

Thank you everyone for your contributions and suggestions, build list as of now is as follows:

i5-13500
ASRock Industrial IMB-X1712/X1314, depending on what I can find
4x 32GB Crucial 3200 MHz UDIMMs
Noctua NH-L9x65
ASRock Arc A380 Low Profile
3x 12TB Seagate IronWolf NAS HDDs (eventually 8x)
2x 2TB Solidigm P44 Pro (for cache)
1x 500GB Sabrent Rocket 4.0
EVGA 550W G3 (to be replaced with a Silverstone GM500-G for redundancy eventually)
Dell R510 Franken-Chassis, hopefully

Going to give TrueNAS another try. While it’s certainly more complex than DSM, there’s also an enormous wealth of how-tos and guides.

So, here is a PC Part Picker list:

PCPartPicker Part List

If we remove the Ironwolves and add $200 to the case, that is a total of ~$1600 for an 8-bay HDD capacity. The CPU is kind of an odd one; a 13600KF is $30 more, and a 13400 is adequate for what you need it for. But hey, your money, your stuff, your decisions :slight_smile:

[edit] May I also suggest this as a chassis?

Just a couple thoughts.

Think about using a 118GB P1600X as your install location for TrueNAS rather than the 500GB NVMe.

Using 4x 12TB drives as your starting point makes some sense. You can do striped mirrors, or you can do a 4-wide RAIDZ1… and then just stripe in another 4-wide RAIDZ1 when you buy more drives.
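To put rough numbers on that (just my own quick math, ignoring ZFS slop space, metadata overhead, and TB-vs-TiB differences), a minimal sketch of the usable capacity for each layout:

```python
# Rough usable-capacity comparison for the layouts mentioned above.
# Ignores ZFS slop space, metadata overhead, and TB vs. TiB differences.

DRIVE_TB = 12

def striped_mirrors(drives: int) -> int:
    """Mirrored pairs striped together: half the raw capacity."""
    return (drives // 2) * DRIVE_TB

def raidz1(drives_per_vdev: int, vdevs: int = 1) -> int:
    """RAIDZ1: one drive of parity per vdev."""
    return (drives_per_vdev - 1) * DRIVE_TB * vdevs

print("4 drives, striped mirrors: ", striped_mirrors(4), "TB usable")   # 24 TB
print("4 drives, 4-wide RAIDZ1:   ", raidz1(4), "TB usable")            # 36 TB
print("8 drives, 2x 4-wide RAIDZ1:", raidz1(4, vdevs=2), "TB usable")   # 72 TB
```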

If you aren’t married to 2U I think this is a better choice of case:

Indeed it would, if it weren’t for:

The one I linked has a depth of 18.5", yours has a depth of 31". Would love a 3U version of the one above though, so you are not constrained to LP cards.

1 Like

Agreed. I was looking at pricing, and at the time the 13500 was only $20-ish more than the 13400, but now it’s $40 more. I thought the 13400’s 16 threads were adequate, but figured 4 extra threads wouldn’t hurt for 20 bucks. I think you’re right, the 13400 is a better call.

I considered that one as well, but the pricing was just unacceptable to me. If the PowerEdge turns out to be too much of a pain I may just go that route and save myself the time, but it seems a shame not to try.

I’ll look into this. My default for years has just been to slap a Sabrent SSD boot drive in and call it good, so what would be the benefit? From what I can tell, the P1600X tops out at 1760/1050 MB/s R/W, whereas the Sabrent tops out at 5000/2500 R/W (this is for sequential). When I ran TrueNAS before, I ran the boot drives in a mirror; is that still best practice?

This would be slick for disk arrays larger than 8 drives. Sliger’s cases are super nice; I have a CX3152a for my gaming rig that has been awesome.

If you had but two inches more, this might have been perfect…

Or this:

While I can understand $479 is a lot, unfortunately your shallow rack does make this a problem. At least you have a plan B… :slight_smile:

Yo! To be honest, a redundant boot drive in a home environment is a waste of an M.2 slot. Think about it: if your boot drive fails… just import the pool into any other OpenZFS system. It might take some time to reconfigure, but you won’t be losing your data. If you are using consumer hardware, NVMe slots are at a premium and I’d rather have more NVMe storage than a redundant boot drive.

The big reason people like the P1600X drives is that they have 10-microsecond latency for 4K random reads and writes and will do much better than NAND flash for typical operating system use. Reason number 2: they have a supercapacitor in case your power goes out. Reason number 3: the 118GB version is about $50.

As a counterargument - if your platform offers multiple x16 CPU-connected slots and supports bifurcation, then you can use a cheap 4x M.2 carrier board like the Hyper M.2 x16 Gen 4 card.

They’re nearly useless in consumer setups, but here they offer expandability and very nice cooling on top.
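To illustrate why this works out on EPYC but rarely on consumer platforms, here is a rough lane-budget sketch; the 24-lane consumer figure and the lanes reserved for other devices are assumptions picked for the example, while the 128 lanes are the EPYC number mentioned below:

```python
# Hypothetical PCIe lane budget: how many 4x M.2 carrier cards (x4 per drive,
# x16 per card via bifurcation) fit after reserving lanes for other devices.

def carriers_that_fit(cpu_lanes: int, reserved_lanes: int, lanes_per_card: int = 16) -> int:
    """Number of x16 bifurcated carrier cards the remaining CPU lanes can host."""
    return max(0, (cpu_lanes - reserved_lanes) // lanes_per_card)

# Assumed reservation: one x16 slot for an HBA/GPU plus two onboard x4 M.2 slots.
reserved = 16 + 2 * 4

for name, lanes in [("typical consumer CPU (assumed)", 24), ("EPYC 7302P", 128)]:
    n = carriers_that_fit(lanes, reserved)
    print(f"{name}: {lanes} lanes, {n} carrier card(s) -> up to {n * 4} extra NVMe drives")
```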

“Multiple x16 PCIe slots connected to the CPU… and they support bifurcation?”

What kind of crazy dream is this? We all know consumers could not possibly need more than an x8/x8 non-bifurcating setup.

Threadripper Pro and EPYCs have 128 PCIe lanes available. It’s doable if you don’t have a monopoly and a desire to segment the market to death.

Well, that is kinda his point though… Consumer != Prosumer != Professionals != Enterprise market.

  • Consumers are stuck at 7950X / 13900K and generally happy there

  • Prosumers can see the value of an EPYC, Threadripper or Xeon build but cannot generally justify the extra $1000-$2500 an entry-level HEDT platform costs, so they are stuck at the consumer level but unhappy

  • Professionals just buy whatever tools they need, and the costs and lock-in effects are of little consequence (see Nvidia, Adobe, Apple for a few examples). In their segment it is all about how efficiently they can do their job; if a $100k investment results in a 3x efficiency boost, they will almost always opt for it.

  • Enterprise looks at the full cost of ownership, and if they can pay $50k to save $100k in power bills and/or real estate they will do it in a heartbeat (replacing ten 6-core machines with a single 64-core machine, for example)

I would love to see a motherboard with three x16 ports sharing 24 lanes between them (allowing any bifurcation combo: x16/x4/x4, x8/x8/x8, x6/x6/x12…), and another 32 lanes for M.2. In theory that is the actual Xeon W-3000 series; in practice, if all I care about is the number of PCIe lanes, why not go full TR/EPYC, which offers 128 lanes over the Xeon’s 56-64 lanes, if I must pay the same price regardless?

Sorry for rant and OT, feel free to take this to a new thread.

That’s a great idea that I somehow totally forgot was a thing. I haven’t done the math on the PCIe lanes, but I’ll definitely check that out. In my application (TrueNAS Core) with spinning rust, how much NVMe as cache is useful? Is there a point where it doesn’t make sense any more? Or is it just dependent on usage, i.e. if I never max out that level of cache, it’s considered enough?

With everyone (it seems like) doing something with a NAS (new builds / upgrades), the question I have is: where do you guys buy/get your drives from? I’ve been looking for a few to upgrade a NAS for capacity but haven’t found any really good deals. Just looking at either 8 or 10 TB drives, up to a total of 8 drives.

I’ve just been watching prices on PCPartPicker for drives and buying on Newegg/Amazon. I know many folks vouch for buying used off eBay, but I don’t have any experience with that. Prices are falling fast enough that I’m okay with paying new prices for the peace of mind.

1 Like