L1 Benchmarking: The New Intel B580... is actually good!

I was just reading your material in the past couple days haha

Just wanted to say thanks for all the benchmarking and writeup that you have done so far!

4 Likes

The card up here is currently priced slightly cheaper than a 7600 XT 16GB.
So in that regard it looks like a pretty good alternative option for budget gamers.

2 Likes

Just found a video of someone testing with ReBAR on and off in a PCIe Gen 4 system. It seems that while the average frame rate only takes a slight hit, the 1% and 0.1% lows are what really suffer. So between PCIe Gen 3 and the lack of ReBAR, I think an older system would be a definite no-go performance-wise.

https://youtu.be/ONLGF9ZzWZ8?si=ql8_D1UKruvF5UZZ

2 Likes

Anyone get a chance to fool around with the B580 and Stable Diffusion? Performance? Compatibility?

Does Wendell, or anyone else here, know of or have heard anything about the possibility of linking two B580 cards through PCIe (two cards in the same board), i.e. something similar to what NVLink or CrossFire do or used to do? And if not, is Intel working on enabling this?
While it was disappointing to some that the B580 uses only eight PCIe 4.0 lanes, it also means that, in theory, two B580s could be paired in boards that have two physical x16 / electrical x8 PCIe 4.0 slots, which some AM5 and LGA 1700 boards do. Having 2x 12 GB of VRAM could enable LLMs that won’t fit in less memory, and not everybody can afford a 4090; two B580s at MSRP cost less than half of one. And yes, maybe just wishful thinking on my part 😀.

1 Like

While that does make sense, and I have not heard anything about it, you could always just get a used 3090, as they are on the cheaper end of things compared to other 24 GB cards. Another option would be a 7900 XTX if you are looking for something closer to 4090 performance with the 24 GB of VRAM.

In both cases, you wouldn’t have to worry about dual-GPU restrictions. And in the case of the 3090, you can SLI them to double your memory bandwidth if you end up wanting that later on.

I just don’t think living on the bleeding edge like that is worth it unless you are willing to experiment and test it for yourself. Probably also worth making a new topic in the appropriate part of the forum.

2 Likes

I don’t think there’s anything stopping you from doing so on Linux. You should be able to just slot in both GPUs and tell ollama/PyTorch/whatever your framework of preference is to use multiple GPUs, and that’d be it.
It’s not really SLI/CrossFire, since those were meant just for games, nor NVLink, since that one is meant to provide a faster interconnect between devices.
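For reference, a minimal sketch of what that looks like with a recent PyTorch build that includes the XPU backend (upstream since roughly 2.4; the naive two-way layer split below is purely illustrative, and frameworks like ollama handle this placement for you):

```python
import torch
import torch.nn as nn

# Requires a PyTorch build with Intel XPU support (upstream since ~2.4,
# or via intel-extension-for-pytorch on older versions).
assert torch.xpu.is_available()
print(f"Intel GPUs visible: {torch.xpu.device_count()}")  # expect 2 with two B580s

# Naive model-parallel split: first half of the layers on card 0,
# second half on card 1; activations hop across PCIe in between.
class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part0 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("xpu:0")
        self.part1 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("xpu:1")

    def forward(self, x):
        x = self.part0(x.to("xpu:0"))
        return self.part1(x.to("xpu:1"))  # tensor crosses the PCIe bus here

model = SplitModel()
out = model(torch.randn(8, 4096))
print(out.device)  # xpu:1
```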

Just being a bit pedantic, but it’s not SLI, since that tech is just for games, but rather NVLink as the interconnect. And it wouldn’t double the actual memory bandwidth, but rather increase the bandwidth between the GPUs, instead of being bottlenecked by PCIe speeds.
For LLM inference I don’t believe that’s very relevant when you only have a pair of GPUs, but it could net some minor perf uplifts when training/fine-tuning.

1 Like

No, but IPEX-LLM supports multi-GPU setups out of the box, so there’s at least that.
I haven’t really seen very comprehensive benchmarks of the performance scaling on Alchemist or Battlemage, though.
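For anyone curious, the basic single-GPU flow from the IPEX-LLM docs looks roughly like this (the model ID is just an example; their multi-GPU paths go through the DeepSpeed AutoTP / pipeline-parallel examples in their repo and follow the same pattern):

```python
# Sketch of IPEX-LLM inference on an Intel GPU.
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example model, swap in your own

# load_in_4bit quantizes weights so a 7B model fits comfortably in 12 GB
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("The Intel B580 is", return_tensors="pt").to("xpu")

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```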

1 Like