[SOLVED] New NAS build: 4 or 8 sticks of RAM in a 4-channel Epyc?

Is it possible to use 8 sticks of RAM with an AMD Epyc 7232P? As far as I understand it, this chip only has 4-channel memory. I’d like to have 128GB of RAM.

My Needs
I’m building a new NAS using TrueNAS. My usage right now is data backups of just my PC, about 20TB of space, but I recently purchased half a petabyte of HDDs and will be taking that 20TB to another location for offsite backups.

I want to use it for:

  • Backups of all Windows machines in the house.
  • Storing photos and videos of my family.
  • Automatic one-way syncing of photos and videos from Android phones.
  • Video projects for YouTube shared with my editor over VPN or some better tech. These require a lot of data capacity.
  • Plex, but I’ve never used it before, only own one Blu-ray, and I’m not sure how it handles my family videos.

Memory
Because I’m using ZFS, memory is important, so I’d like to know my memory options.

  • Can I use 8 sticks of RAM? If so, is that the preferred way on a 4-channel chip to maximize capacity at a lower cost per stick?
  • As far as I understand, I want DDR4-3200 RDIMMs, but I’m not sure about the ranking (single vs. dual rank).
  • Where are good places to shop for RDIMMs?
  • Crucial RDIMMs seem reputable. Are those good or should I go with another manufacturer?

Motherboard
I’m looking at the ASRock Rack ROMED8-2T, although it’s pretty expensive. Is that normal pricing?

A comparable board is the Supermicro H12SSL-CT, which is priced about the same. I’ve used both Supermicro and ASRock Rack and have no preference, although the ASRock board looks better.

Chassis
This NAS will sit in a Storinator. I’m buying the standalone XL60 chassis, which can hold many more than 60 drives depending on the configuration, so I’ll need connectivity for a lot more drive slots, like 72 or more.

I currently own two LSI 9305-24i cards and two Adaptec AEC-82885T SAS expanders. Combined, those can handle up to 88 drives, not counting any SAS or NVMe on the board itself.
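For anyone checking my math, here’s roughly how the 88 pencils out; the split of expander ports between uplink and drives is my assumption (I’m planning a two-cable x8 uplink per expander):

```python
# Rough drive-connectivity math for 2x LSI 9305-24i + 2x Adaptec AEC-82885T.
# Assumes each expander takes an x8 (two-cable) uplink from the HBAs; the
# exact cabling split is an assumption, not a vendor figure.
hba_ports_each = 24        # LSI 9305-24i: 24 internal SAS lanes
expander_ports_each = 36   # AEC-82885T: 36-port SAS expander
uplink_per_expander = 8    # HBA lanes used as uplink per expander (assumed)
hbas, expanders = 2, 2

direct = hbas * hba_ports_each - expanders * uplink_per_expander            # 32 drives on the HBAs
behind_expanders = expanders * (expander_ports_each - uplink_per_expander)  # 56 drives on the expanders
print(direct + behind_expanders)  # 88
```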

1 Like

(image from your hyperlink)

Thanks! I’d looked up a video by Serve the Home where he said it was a 4-channel chip.

Just looked up an article on it, same thing:

There is one caveat, the AMD EPYC 7232P is a 4-channel memory-optimized SKU. That means that one can still populate all eight channels and sixteen DIMMs of DDR4-3200, but one will only see half the memory bandwidth of higher-end parts. For this market that is more cost versus performance sensitive that makes sense.

Source: AMD EPYC 7232P Review Hard to Buy but Solid Part

But I guess I misunderstood.

Is there a way to see how many banks I can use, or whether it’s okay to buy two batches of 64GB (16GBx4)?

Each chiplet has a bandwidth cap, and that model probably uses fewer chiplets. The IO die should be the same, though. It’ll only matter when it matters, meaning it’s workload-dependent. You should be fine to get multiple sets of the same RAM; I’ve heard people say otherwise in SOME cases, but…
Just buy the RAM you want, and if it doesn’t work, somebody besides you fucked up; return it and try again.

1 Like

Interesting. I think the IOD in every EPYC 7xx2 is identical, so you get 8 memory channels, and that is also what the AMD documentation I’ve encountered hints at.

AMD documentation also states that the Rome Infinity Fabric frequency is 1467 MHz (decoupled from memory frequency, unlike Milan) and that it transfers 32 bytes per clock on reads (16 for writes), which works out to ~42GiB/s read speed per CCD, or ~21GiB/s write speed.

Apparently there are 2 CCDs in a 7232P, so that totals 84GiB/s of IF bandwidth into the IOD, which is about half of what I’d expect from the 4/8-CCD variants, so that roughly lines up with what STH is saying.
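Spelled out, in case anyone wants to check the arithmetic (my ~42/~84 figures above are rough; the exact result depends on GB vs. GiB and on whether the FCLK really is 1467 MHz):

```python
# Back-of-the-envelope Infinity Fabric bandwidth for the 7232P, using the
# figures above: FCLK x 32 B/clock reads (16 B/clock writes) per CCD, 2 CCDs.
GIB, GB = 1024**3, 1000**3

fclk_hz = 1467e6    # Rome IF clock per the AMD docs mentioned above
ccds = 2            # CCDs apparently present on the 7232P

read_per_ccd = fclk_hz * 32    # bytes/s
write_per_ccd = fclk_hz * 16

print(f"read per CCD:  {read_per_ccd / GIB:.1f} GiB/s ({read_per_ccd / GB:.1f} GB/s)")
print(f"write per CCD: {write_per_ccd / GIB:.1f} GiB/s")
print(f"total read into the IOD: {ccds * read_per_ccd / GIB:.1f} GiB/s")
```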

I think that also means the PCIe bandwidth is capped at that rate, too - but I didn’t find anything explicitly stating that. I can’t see how the combined PCIe+memory transfer rate could exceed that, unless doing DMA between memory and PCIe bypassing the IF (maybe possible on some filesystems with RDMA?).

How that affects the NUMA/NPS setting, I can’t figure out :slight_smile: - presumably NPS1 and NPS2 would work, but not NPS4 because there are only 2 IF links to CCDs - no idea.

In any case, for a NAS, even maxed out with SSDs, I’d imagine that amount of bandwidth to be plenty (unless it’s maxed out with NVMe Gen4 SSDs, in which case it’d hit a bottleneck after about ~6 devices - assuming bandwidth is shared between NVMe and network tasks).
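To put a rough number on that bottleneck guess (the ~7.9GB/s per Gen4 x4 drive and the assumption that served data crosses the fabric twice, SSD to memory and then memory to NIC, are mine):

```python
# Very rough estimate of how many Gen4 x4 NVMe drives it takes to saturate
# the fabric, assuming ~84 GiB/s of total IF bandwidth and that served data
# crosses the fabric twice (SSD -> RAM, then RAM -> NIC).
GIB = 1024**3

fabric_total = 84 * GIB               # bytes/s, from the estimate above
usable_for_reads = fabric_total / 2   # assumed: the other half feeds the NIC path
per_gen4_x4_ssd = 7.9e9               # ~7.9 GB/s ceiling for one Gen4 x4 device

print(f"{usable_for_reads / per_gen4_x4_ssd:.1f} drives")  # ~5.7, i.e. about 6
```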

3 Likes

Yes; hover over the :information_source: icon next to “Per Socket Mem BW” for more details.

Dual rank will allow slightly more memory bandwidth in certain workloads. Single rank for the same capacity will have lower power consumption under load. There’s unlikely to be a significant difference in either of these things for your usage.

$600+ is normal pricing for server boards at this level, yes.

They’re good. In general, the vendor QVL for boards is a good place to start; ASRock’s is here. Supermicro brands their own RAM, but you can get to it via the Tested Memory link on the board page.

One of my recent Kingston RDIMM purchases was from Provantage, who is a reseller rather than a retailer, so individual shipments come from distributors in various locations. They’re optimized for business logistics but happily accept individual orders. In my case, they drop-shipped from a Kingston warehouse.

That 1467 MHz × 32 bytes per clock works out to 46.94GB/s (you’ll see why the unit switch in a moment) and would be 93.89GB/s for two CCDs. Given that AMD specifies 85.3GB/s for these SKUs, and @oegat mentions running 3200MT/s modules on a 7252 (same category) here, I suspect the Infinity Fabric is clocked at 1333MHz for 42.66GB/s per CCD. That number also aligns with two channels of DDR4-2667 in the same quadrant, which is another number AMD says it’s optimized for. I’ve had a hard time finding info on the internal details, though, and never obtained one myself, so this remains speculation.
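Spelling out the comparison (the 1333MHz value is my speculation; 85.3GB/s is AMD’s published “Per Socket Mem BW” for these SKUs):

```python
# Candidate FCLKs vs. AMD's published 85.3 GB/s figure, using the same
# 32 bytes/clock read width per CCD as above (1333 MHz is speculation).
GB = 1000**3

for fclk_mhz in (1467, 1333):
    per_ccd = fclk_mhz * 1e6 * 32 / GB
    print(f"FCLK {fclk_mhz} MHz: {per_ccd:.2f} GB/s per CCD, {2 * per_ccd:.2f} GB/s for 2 CCDs")
# FCLK 1467 MHz: 46.94 GB/s per CCD, 93.89 GB/s for 2 CCDs
# FCLK 1333 MHz: 42.66 GB/s per CCD, 85.31 GB/s for 2 CCDs  <- lines up with AMD's 85.3

# The same ~42.66 number also falls out of two channels of DDR4-2667 in one
# quadrant (8 bytes per transfer per channel):
print(f"{2 * 2667e6 * 8 / GB:.2f} GB/s")  # 42.67
```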

The Infinity Fabric is the switch tying together all of the high-bandwidth I/O, including the UMC and the PCIe RCs. RDMA isn’t required; plain old DMA does PCIe ↔︎ SDRAM communication across the Infinity Fabric, and that’s what the IOMMU mediates. But I’m not sure what AMD means: when they speak of memory bandwidth, are they just referring to the reduced number of CCDs limiting the aggregate UMC ↔︎ CPU bandwidth, which is exactly what most workloads and benchmarks like STREAM care about? Or did they also limit the switching capacity of the Infinity Fabric?

So far I haven’t come across anything suggesting the fabric is limited in capacity, other than potentially being 1333MHz instead of 1467MHz. On the other hand, I also haven’t seen solid numbers on what the switching capacity actually is. It’s something I’m very curious about, if anyone should ever come across the information.

Fortunately those details are published in the NUMA guide (pg 7): just NPS1 for the 7232P, and NPS0 and NPS1 for the 7252, 7272, and 7282. The implication is that the active CCDs may be on the same half of the SoC.

Agreed, I’d expect these component choices to do quite well. It only has to serve ~2GB/s with both 10GbE ports populated after all :smile:
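(For reference, that ~2GB/s is just both 10GbE ports at line rate:)

```python
# Both 10GbE ports flat out, ignoring protocol overhead.
ports, line_rate_bits = 2, 10e9
print(f"{ports * line_rate_bits / 8 / 1e9:.2f} GB/s")  # 2.50 GB/s raw, a bit less after overhead
```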

3 Likes

Sorry, I should have been clearer there - I was thinking about what type of NAS workload/protocol would actually allow DMA from a filesystem on a disk device straight to the NIC. Certainly not ZFS :slight_smile: but maybe SMB Direct + RDMA?

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.