New 2X GPU system for VFX / CG

Hi,
I am looking at building a new computer mainly for CG work.
It should have 128GB of RAM and a fast processor.

Mostly, though, I will need the ability to hook up two GPUs, as my main rendering tool is Redshift, which relies on GPU power. It can also use out-of-core rendering, which is ideal, so the VRAM doesn’t need to be pooled via NVLink or anything like that.

I think I will go with a 13th-gen Intel Core i9-13900K, but what are the chances of running 128GB of DDR5 at a decent speed? Not sure how much the RAM speed matters in my case, though.

For the GPU, I would start by buying a single RTX 4090 and add a second one later.

I assume a 1500W PSU minimum for dual GPUs, but is even that going to cut it? It has to, because my wall outlet will not appreciate much more than that…

My question at this point: do you think a consumer Core i9-based system is going to cut it, or would I be better off looking at some kind of HEDT system like Threadripper?

Reliability and GPU power are the key factors, not gaming performance.

Going dual GPU, I would go with a Threadripper 7xxx, EPYC, or Xeon workstation.

The reason is the limited number of PCIe lanes on the consumer platforms. You can go Ryzen 9 or Core i9, and that gives you either 1x16 + 1x4 lanes or, if you are lucky, 2x8 lanes for the GPUs.

Even if those slots are PCIe 5.0 (and no GPUs do 5.0 yet), you still lose the x16 link per card, and that can have a negative impact.
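For a rough sense of what splitting the slots costs in raw numbers, here’s a back-of-the-envelope sketch (theoretical one-way link bandwidth only, ignoring protocol overhead beyond the line encoding; real-world transfer rates will be somewhat lower):

```python
# Back-of-the-envelope PCIe bandwidth: published per-lane transfer rates,
# reduced by the 128b/130b line encoding, ignoring other protocol overhead.
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}   # GT/s per lane for PCIe 3.0 / 4.0 / 5.0
ENCODING_EFFICIENCY = 128 / 130             # 128b/130b line code (gen 3 and up)

def one_way_bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Approximate one-way bandwidth of a PCIe link in GB/s."""
    return GT_PER_LANE[gen] * ENCODING_EFFICIENCY * lanes / 8  # bits -> bytes

for gen, lanes in [(4, 16), (4, 8), (5, 16), (5, 8)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{one_way_bandwidth_gb_s(gen, lanes):.1f} GB/s")
```

A 5.0 x8 link has roughly the same raw throughput as a 4.0 x16 link (~31.5 GB/s), but since current cards only negotiate 4.0, running two of them at x8 really does halve the bandwidth each card can use.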

That said, here is a PCPartPicker list for a 13900K build that stretches the platform to the limit. The 7900 XTX was chosen because the 4090 is sold out; a 4080 might be a good replacement.

PCPartPicker Part List

No, you will not buy a 4090 right now unless you want to go to dodgy second-hand sites.


I’ve heard of one person who had 128GB of DDR5 running at 5600MHz with just XMP and no further messing around. Not sure how likely that is to work for others.
When it comes to AM5, we have a thread about this that you could check.

You could power-limit the GPUs and do as you please. I have 2x 3090s power-limited at 275W each on an 850W PSU, and it still has some leeway.
If you don’t want to do that, grab one of the 450W 4090s (instead of the 600W ones): 2x450W = 900W, give another 300W to the CPU and the rest of the setup, and you should still have some headroom.

If 8 lanes per GPU are enough for you, you won’t need more than 256GB of RAM, and you are happy with just 1 or 2 NVMe drives, I see no reason to move to HEDT.


Thank you so much, super helpful, especially about power limiting. That is a really reasonable step to take. How do you do that in practice, by the way? Afterburner?

Actually I went with AM5 on this one, and I will make do with just 64GB of memory. I’ll have to find workarounds to limit memory use when doing CG work anyway (proxies and such), although some complex sim / fluid work does have a hard requirement for memory.

My path to the new PC is basically to reuse my RTX 3090, and then, when prices come down and availability improves, get a 4090 to act as the main card and move the 3090 to the bottom slot.

I know the lanes will be split, but I wonder how much that matters when rendering in Redshift and V-Ray? I get the feeling it won’t saturate the lanes anyway, and having two cards in there is just an enormous boost for that kind of work.

If you’re on Windows, then I guess Afterburner is the software most people use. Since I only use Linux, I use nvidia-smi for that.
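In case it helps, here’s a minimal sketch of how I’d script it, assuming nvidia-smi is on the PATH and the script runs as root (setting the limit requires it). The 275W cap is just the value I use on my 3090s; pick whatever fits your cards and PSU budget.

```python
# Minimal sketch: cap the power limit of every detected NVIDIA GPU via nvidia-smi.
# Needs root, and the requested limit must be within the range the card allows.
import subprocess

POWER_LIMIT_W = 275  # example per-card cap; tune to your cards and PSU budget

def gpu_indices() -> list[int]:
    """List the indices of all GPUs that nvidia-smi can see."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [int(line) for line in out.splitlines() if line.strip()]

def set_power_limit(index: int, watts: int) -> None:
    """Apply a power limit (in watts) to one GPU."""
    subprocess.run(
        ["nvidia-smi", "-i", str(index), "-pl", str(watts)],
        check=True,
    )

if __name__ == "__main__":
    for idx in gpu_indices():
        set_power_limit(idx, POWER_LIMIT_W)
        print(f"GPU {idx}: power limit set to {POWER_LIMIT_W} W")
```

As far as I know the limit doesn’t survive a reboot, so you’d typically run something like this from a startup script or systemd unit.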

Worst case, you can just upgrade later on.

From what I found, it makes almost no difference in those applications.
