RTX 2080 Ti vs SLI 1080 Ti

Anyone know the performance difference between two 1080 Tis in SLI and a single RTX 2080 Ti? It seems more cost effective to get dual 1080 Tis and run them in SLI, especially since two of them cost the same as one 2080 Ti. Now, as for power consumption… I've looked for benchmarks but can't find any. Any thoughts?

As a multi-GPU owner (Crossfire Vega 64): multiple cards often don't work properly. And even when they do, you often need to disable various graphical options (which is usually undocumented), and sometimes you get negative scaling (hello, Ghost Recon Wildlands, which has "official Crossfire support" yet performs worse with it enabled).

If you have the budget for two 1080 Tis, put a little more away for a 2080 Ti.

If you want multi-GPU for shits and giggles or out of curiosity (I bought my second one because it was on sale cheap, and I mined a bit with it), go for it. But just don't expect it to be anywhere near as good as a single card that performs better, because it either doesn't work, breaks the game, or performs worse as often as not.

You’ll likely spend half as much time tweaking settings to make the game work as you will playing the damn game :slight_smile:

In my experience, when Crossfire works, it's great. But way too many games just don't work with it.

2c.


Basically, in games that don't support SLI, you're looking at roughly 20% less performance than a single 2080 Ti, and in some cases worse, since SLI scaling can actually reduce performance.
Best case, comparing 1080 Ti SLI and a 2080 Ti against a single 1080 Ti: the 2080 Ti is about 20% faster than a 1080 Ti, while 1080 Ti SLI is up to 100% faster. That best case only holds for a few games.
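
To put rough illustrative numbers on that (the 60 FPS baseline is made up; only the ratios matter):

$$\underbrace{60\ \text{FPS}}_{\text{1080 Ti}} \qquad \underbrace{\approx 1.2 \times 60 = 72\ \text{FPS}}_{\text{2080 Ti}} \qquad \underbrace{\approx 2 \times 60 = 120\ \text{FPS}}_{\text{1080 Ti SLI, best case}}$$

In a game with no SLI support the pair falls back to single-card speed, 60 FPS, which is where the "roughly 20% behind a 2080 Ti" figure comes from.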

In reality, if the game works well with SLI, you will get better price-to-performance with 1080 Ti SLI. But for most games a single 2080 Ti will outperform it, simply because most games don't support SLI.

It's really down to your preference. We don't know what future titles will be like, but currently you will see better performance running 1080 Ti SLI in maybe 1 in 5 AAA games.
If you already have a 1080 Ti, it's definitely the cheaper option if you need better performance in games that you know support SLI. But for the most part, neither option makes cost-effective sense. You might be better off waiting six months to a year to see how the market adjusts. Both 10-series SLI and the 20-series fail to make cost-effective sense for most people, even if you need better-than-2080 Ti performance.


As an example of negative scaling (GR: Wildlands):

My 2700X with Crossfire Vega 64 struggles to hit 50 FPS in the benchmark at 2560x1080 (Ultra). I get weird graphical glitches from time to time, shimmering textures, etc. GR: Wildlands has "official" Crossfire support. GPU utilisation reported by the benchmark with Crossfire enabled is something like 37%.

It runs better with a single card (Crossfire manually disabled in the driver's game profile).

Witcher 3: frustrated me no end. I got literally 5-10 FPS in Crossfire until I disabled some graphics options (which were not obvious; I basically had to Google it), at which point it's buttery smooth at 2560x1080. But it pretty much was anyway with a single Vega; I just get less fan noise.

Borderlands 2: works great in Crossfire. The GPUs sit at 300 MHz, essentially idle. No tweaking required. But… again… a single card runs that game easily anyway.

Doom 2016: doesn't support Crossfire at all, last I checked (runs fine on a single card anyway).

The general trend seems to be that modern games often don't work (Vulkan and Crossfire don't appear to be a thing together).

The games that do work with Crossfire are either old enough that they run just fine on three-year-old hardware anyway, or very rare.

edit:
I know SLI and Crossfire are not exactly the same, but from what I understand the level of support is roughly the same. In fact, last year AMD was pushing Crossfire for the Polaris cards, whilst Nvidia was trying to kill multi-GPU off and restrict SLI to top-end cards only (there was even talk of an "enthusiast key" to unlock SLI). So moving forward, Crossfire support may actually be better. And it's pretty dismal…

I am just learning TensorFlow and trying to GPU-accelerate some simple tasks. Two 1080 Tis are the best bang for the buck in CUDA. The fact that CUDA can do:

int i = blockIdx.x * blockDim.x + threadIdx.x;

and float calculations is so freaking cool, I don’t know what to say. … … …

2 x 1080ti = 7000 cuda cores * ~1 million calculations/s = poop.my.pants(soiled=true, emoji=“OMG”.format(sunglasses))
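
If anyone wants to see that pattern in context, here's a minimal self-contained sketch (the kernel and buffer names are made up for illustration): each thread computes its global index and adds one pair of floats.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element: c[i] = a[i] + b[i].
__global__ void addFloats(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // the quoted index pattern
    if (i < n)                                      // guard the partial last block
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // ~1M floats
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);     // unified memory keeps the demo short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // round up to cover all n
    addFloats<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);      // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```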

NVLink is just trying to get past PCIe bandwidth limitations at this point. If both your GPUs are in electrically x16 slots and you run a Vulkan game with explicit multi-GPU, SLI/NVLink is a moot point.
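
For anyone unfamiliar, "explicit multi-GPU" means the game itself addresses each GPU through the API, instead of relying on a driver-level SLI/Crossfire profile. A minimal sketch of the Vulkan 1.1 side (plain C++ against the Vulkan headers, error handling trimmed, so treat it as illustrative) that just lists the device groups the driver exposes:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    // Device groups are core in Vulkan 1.1, so request that API version.
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ici{};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS)
        return 1;

    // First call gets the count, second call fills the properties.
    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);
    if (count > 8) count = 8;

    VkPhysicalDeviceGroupProperties groups[8];
    for (uint32_t i = 0; i < count; ++i) {
        groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        groups[i].pNext = nullptr;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups);

    // A group with physicalDeviceCount > 1 can back a single logical
    // device spanning both GPUs; the game never touches the bridge.
    for (uint32_t i = 0; i < count; ++i)
        printf("group %u: %u physical device(s)\n",
               i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Whether the driver actually puts two cards into one group is up to the vendor, but the point stands: the bridge is invisible at this level.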


Not true. Ashes of the Singularity supports multi-GPU rendering through DX12's explicit multi-GPU implementation, which relies on the PCIe bus. In the case of the RTX 2080 Ti, scaling isn't great because the PCIe bus gets saturated: PCIe 3.0 x16 is a bottleneck for top-end dual-card rendering. While it's now possible to do, it is not going to be viable for long on the current generation of PCIe. What needs to happen is either a new generation that dramatically increases the available bandwidth in future chipsets (unlikely at the scale of change required), or Nvidia and AMD adopting dedicated channels through extra hardware, which Nvidia is already doing with NVLink.
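
For rough context (numbers from memory, so treat them as approximate): PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so a x16 slot gives

$$16 \times 8\ \text{GT/s} \times \frac{128}{130} \times \frac{1\ \text{B}}{8\ \text{b}} \approx 15.8\ \text{GB/s per direction}$$

versus roughly 25 GB/s per direction for a single NVLink 2.0 link (and the 2080 Ti carries two). That gap is the whole argument for a dedicated channel.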

I think the point was that neither Vulkan nor DX12 explicit multi-GPU uses NVLink, whether it is there or not.


I prefer Vulkan explicit multi-GPU over DX12 explicit multi-GPU any day.


I was stating that the current implementation of explicit multi-GPU is flawed, in that current hardware isn't capable of providing the bandwidth without a huge overhaul of future designs, beyond what is considered cost-effective.

It doesn't matter what you prefer; it matters what studios choose as the viable option. Vulkan is a very difficult API to work with and the implementation on the GPU side has not matured. Not to mention it doesn't fix the issue of prohibitively thin bandwidth.

DX, Mantle, OpenGL and Vulkan are just standards. They're interface standards, provided so that both GPU manufacturers and game developers can implement their code compatibly with other software. There is nothing stopping Nvidia from changing the implementation of explicit multi-GPU rendering to communicate over the NVLink standard. Communication is communication; it doesn't matter which path it takes, as long as it makes it from point to point without a bottleneck. NVLink is the only standard currently available that can push the bandwidth needed for SLI or explicit multi-GPU on cards faster than a 1080 Ti. There is no way around it (there's a CUDA sketch of this below).

Either a new PCIe standard needs to be developed, with a N/S bridge capable of handling the bandwidth, or an alternative to communicating over PCIe needs to be developed.
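
To make the "communication is communication" point concrete from the API side: here's a hedged CUDA sketch (it assumes two GPUs are installed; the buffer size and names are mine) where the very same copy call gets routed over NVLink when a bridge is present, and over PCIe otherwise.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Ask whether GPU 0 can directly address GPU 1's memory (P2P).
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    printf("peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");

    const size_t bytes = 64 << 20;  // 64 MiB test buffer

    float *src = nullptr, *dst = nullptr;
    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    if (canAccess)
        cudaDeviceEnablePeerAccess(1, 0);  // from current device (0) to device 1
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    // The transport sits below the API: with P2P enabled this copy goes
    // directly card-to-card (NVLink if bridged, PCIe otherwise); without
    // it, the driver bounces through host RAM. The calling code is identical.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}
```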

They are way ahead of you:



Curious to see how the new boards cope with 300 watts per slot vs. electrical noise on the bus, etc.
