There’s an entire topic discussing that in case you want to give it a go:
Seems like some folks managed to get 4x48GB working without issues at 5200~5600MHz out of the box. 48GB DIMMs seem to be easier to get working than 32GB ones.
I actually haven’t seen many positive LGA1700 results with 128GB; most can’t even go past 4400MHz, and some get stuck at 3600MHz (at that point you’d be better off with DDR4).
I googled around and found some reports of folks with 14900Ks running 192GB at 5200MHz, so it should be doable nowadays.
Btw, if you’re working with ML, wouldn’t a Ryzen be more interesting than Intel due to AVX-512? NumPy gets a nice oomph out of it.
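If you want to check whether your NumPy build actually picks up AVX-512 on a given CPU, something like this works (needs NumPy >= 1.24 for show_runtime):

```python
# Prints runtime info, including which SIMD extensions NumPy detected
# on this CPU. On an AVX-512 part you should see AVX512F and friends
# listed under "SIMD Extensions".
import numpy as np

np.show_runtime()
```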
Nah, it’s pretty easy to get 2x3090s going without problems, and I say that as an owner of a couple of those.
No 4090 is going to beat 2x3090s, both in total VRAM and in compute. With two cards you can either run bigger models (not doable at all on a single GPU that lacks the VRAM), or double the batch size for roughly double the training throughput on models that do fit on one GPU.
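As a rough illustration, here’s a minimal PyTorch sketch of splitting a batch across both cards (the model and sizes are made up, and DistributedDataParallel is what you’d want for serious training, but this shows the idea):

```python
# Minimal two-GPU sketch with nn.DataParallel: the input batch is
# split along dim 0 across the listed devices, so each 3090 sees
# half the batch. Toy model, purely for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

x = torch.randn(256, 1024).cuda()  # 128 samples end up on each GPU
out = model(x)
print(out.shape)  # torch.Size([256, 10])
```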