Does anyone have access to an Epyc 7773X for a quick benchmark?

Yeah, that’s way more expensive than a dual-Epyc system you could build off eBay. Yes, the price is in line with new Epycs, but why buy new when second-hand will work just as well? For example, dual 7T83s would run you ~$5k, plus around $1.7k for nice 2Rx4 3200 RAM (512GB), about $800 for a board, and ~$1.5k for a very nice 2U or 4U case, so roughly $9k total (you could probably get it for less if you’re not in a hurry and are willing to wait a bit). Or you could go with dual 7H12s and get nearly as good performance for $1.4k less.

And let me be clear: I am not talking about production-ready critical infrastructure, but something I would (and plan to) run at home to replace the R930s and T630 I currently have.

It’s not exactly fair to compare the price of a new current-gen system with a two-generations-old one off eBay. Also, it’s 256 cores, not 128. A new modern dual Epyc costs about the same as the Ampere, as I said.

Also, nobody knows how it performs and there is no way to find out, but it has a unified memory architecture for each 128-core socket, so it may well outperform four 7T83s.

Oh, no doubt it will be more performant, especially for bladebit, which is hypersensitive to bandwidth and latency. If I had an unlimited budget I would go for Ampere Altra 100%, but since that’s not the case, I will have to go for either dual 7H12s or 7T83s, or dual 8373Cs or 8375Cs.

Yup. You’d need 15-20Gb, not 25Gb as you stated earlier. You’re also missing the possibility of having the plotting system itself harvest the direct-attached HDDs, in which case the NIC may be irrelevant.

The last 512GB of ECC DDR4 3200 I bought was $2200.

  • You’re worried about cost, but want to build a complete second system instead of dual sockets in a single system?
  • NUMA-isolated processes in a dual-socket system are effectively equal in performance to separate systems.
  • Higher compute density per rack unit.
  • The system would presumably do other tasks when not plotting, which would be the majority of its lifetime.

So it’s either a 25Gb interface or link aggregation, right?
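If it’s link aggregation, I assume that means an LACP bond, something like this with iproute2 (eth0/eth1 are placeholder interface names, and the switch would have to support 802.3ad):

ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up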

A 2U Supermicro with twin nodes (and they have 4-node versions) is as dense as dual-CPU, costs the same, but plots faster, and you don’t need to bother with complex things like affinities and link aggregation.

That is for small farms that don’t need Epyc plotters whatsoever, unless farming is your full-time job.

The optimal setup there would be a 10Gb NIC on each NUMA node.
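To line that up, you can check which node a NIC actually hangs off via sysfs (eth0 is a placeholder name; the file prints the node ID, or -1 if the platform doesn’t report one):

cat /sys/class/net/eth0/device/numa_node
numactl --hardware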

No, they don’t.

No, they don’t. If anything they plot slower given you’re more constrained on cooling. They certainly would be louder.

Instead, you’d be maintaining two complete systems instead of just one with 2x the performance (where the ‘added complexity’ is simply running a second instance of the plotter on the other NUMA node).

My 2.4PB attached to the very system that plotted to them disagrees with you.


What server are you using with dual NICs attached to different CPU sockets?

Why is that? Both nodes are 2U. Did you even check what you’re discussing with me here?

That is a small farm that a 32-core Threadripper plots out in 3 months. It doesn’t warrant multiple Epycs at all.

There’s no sense in my arguing further here. I’ve spent weeks tweaking and tuning systems for Chia plotting and evaluating cost/benefit for recommending configs to system builders (Supermicro) as part of my job at Intel (I was also testing on AMD for competitive analysis). I know several of the Chia devs and have worked with them on bugs / performance optimizations. I hold most of the plotting speed records. I’ve ‘been there, done that’ WRT most of what you’re only hypothesizing about. Learn from it or don’t.

Hey malventano, assuming that system (512GB per socket, 1 SSD / 1 NIC per socket as well), how would one set up two bladebit instances, one per socket?

The bladebit CLI doesn’t really document this, so I’d assume there are additional steps done through the terminal command line? If so, what are they?

You can assign a bladebit process to a specific CPU, or even a specific CCD, with the numactl command; however, SSDs and NICs can’t be “assigned”: they must be physically connected to a PCIe slot that is wired to the specific CPU. That means you’ll have to use an additional PCIe NIC for one of the CPUs.
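As a sanity check, you can confirm which node a given device is physically wired to (nvme0 is a placeholder device name; lstopo is from the hwloc package and draws the full PCIe-to-NUMA topology):

cat /sys/class/nvme/nvme0/device/numa_node
lstopo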


I do have separate NIC and SSD so that much isn’t an issue.

Could you elaborate more on how the command line starts and what it looks like?

I have tried --no-numa ./bladebit -c -f etc etc and they don’t work.

You must launch bladebit with --no-numa --no-cpu-affinity, or else it will override numactl. To split the nodes, you’d launch one instance with numactl -m0 -N0 and the other with numactl -m1 -N1.
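Schematically, that would look like the following (the -c/-f values are elided as in your command, and <plot_dir_0>/<plot_dir_1> are placeholder output paths on each socket’s own SSD):

numactl -m0 -N0 ./bladebit --no-numa --no-cpu-affinity -c … -f … <plot_dir_0>
numactl -m1 -N1 ./bladebit --no-numa --no-cpu-affinity -c … -f … <plot_dir_1>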

Hey, thanks for replying.

I have checked -h (help) for bladebit; it has -m for --no-numa, and --no-cpu-affinity. But would it understand the -m0 or -N0 options, or are those strictly for numactl?

Basically, I’d run something along the lines of:

numactl -m0 -N0 ./bladebit -m --no-cpu-affinity -c -f… etc. for one terminal instance
numactl -m1 -N1 ./bladebit -m --no-cpu-affinity -c -f… etc. for a second terminal?

