I added a 4090 yesterday for rendering in Redshift and was excited to see their combined performance. I can't seem to get them both working at the same time, or even to show up in GeForce Experience, though they do show up in Device Manager! There is only one available driver for both cards now, and it's 522.25.
Any guesses how to do this? The rest of the internet says it can be done, but I have no clue!
You can't just plug a 4090 into a build with a 3090 and expect them to work together.
NVIDIA, for one, no longer supports SLI and recently removed support from the driver for legacy systems.
Also, the cards aren't matching hardware; the best option for SLI was to have two cards of the same make and model.
Lastly, there is no hardware bridge to connect both cards physically.
I would just ignore GeForce Experience; it probably isn't programmed to handle multiple cards from different generations.
I don't know much about Redshift specifically, but I have seen that multiple GPUs from different generations should work in it. You might have to change some settings in Redshift to get it to use both of them, or you might need to update it if the 4090 is the one it can't detect.
And you don't need to worry about SLI; I don't think it's relevant for rendering software.
It appears that Redshift can split up its workload and render multiple frames, one per GPU. So different GPUs can likely be mixed in the OP's workload (and I'd assume if the OP is spending this much money, they know what's possible and what's not).
Yes! Redshift can be configured to use all compatible GPUs on your machine (the default) or any subset of those GPUs. You can even mix and match GPUs of different generations and memory configurations (e.g. 1 GTX TITAN + 1 GTX 1070). Redshift supports a maximum of 8 GPUs per session. Using a render manager (like Deadline) or using your 3d app’s command-line rendering, you can render multiple frames at once on systems with multiple GPUs. This can help ensure that the GPU resources are used as efficiently as possible. For more information on hardware considerations for Redshift, please read …
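Before digging into Redshift settings, it's worth confirming the driver itself sees both cards. A minimal sketch (assuming `nvidia-smi`, which ships with the NVIDIA driver on both Windows and Linux):

```shell
# List every GPU the NVIDIA driver currently exposes.
# If the 4090 is missing here, the problem is at the driver level,
# not in Redshift's device selection.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L
else
    echo "nvidia-smi not found - is the NVIDIA driver installed?"
fi
```

If both cards are listed there, Redshift's own GPU checkboxes in its preferences are the next place to look.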
Took a bit of digging, but it looks like I'm completely wrong on all counts on this one.
Apologies, guys.
But I also found:
One important difference between GeForce GPUs and Titan/Quadro/Tesla GPUs is TCC driver availability. TCC means “Tesla Compute Cluster”. It is a special driver developed by NVidia for Windows. It bypasses the Windows Display Driver Model (WDDM) and allows the GPU to communicate with the CPU at greater speeds. The drawback of TCC is that, once you enable it, the GPU becomes ‘invisible’ to Windows and 3d apps (such as Maya, Houdini, etc). It becomes exclusive to CUDA applications, like Redshift. Only Quadros, Teslas and Titan GPUs can enable TCC. The GeForce GTX cards cannot use it. As mentioned above, TCC is only useful for Windows. The Linux operating system does not need it because the Linux display driver doesn’t suffer from latencies typically associated with WDDM. In other words, the CPU-GPU communication on Linux is, by default, faster than on Windows (with WDDM) across all NVidia GPUs, including GeForce and Quadro/Tesla/Titan GPUs.
It may explain why one of the cards isn't showing up,
the OP's cards being RTX models that may support TCC.
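One way to check whether TCC is actually in play is to query each GPU's current driver model. A hedged sketch, again assuming `nvidia-smi` (the `driver_model` field is only meaningful on Windows; on Linux it reads "N/A" because there is no WDDM there):

```shell
# Show each GPU's index, name, and current driver model (WDDM vs TCC).
# A card in TCC mode is hidden from Windows display tools, which could
# make it look "missing" even though CUDA apps like Redshift can use it.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=index,name,driver_model.current --format=csv
    # To put a card back on the normal display driver model (WDDM),
    # run as administrator and reboot afterwards (GPU 0 shown here):
    #   nvidia-smi -i 0 -dm 0
else
    echo "nvidia-smi not found - is the NVIDIA driver installed?"
fi
```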