Intel just announced their Battlemage B570 and B580 GPUs. Even though these are low-end (mid-range?) parts, something about this really excites me.
Especially the LE card design. It just… speaks to me.
But that leaves me wondering: what has Arc software support been like on Linux, especially more recently? I understand Arc drivers were terrible in the early (and not-so-early?) days, especially on Windows, with even things like driver updates not working reliably. But on Linux, I believe this is not an issue, since the driver is part of the kernel/Mesa?
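For reference, checking which kernel driver is actually bound to a card is straightforward; here’s a quick Linux-only sketch that reads sysfs (the paths are standard, but I’m assuming a PCI-attached GPU):

```python
# Quick sketch: list DRM GPUs and the kernel driver bound to each
# (e.g. i915 or xe for Intel cards). Linux-only; assumes PCI GPUs.
import glob
import os

for card in sorted(glob.glob("/sys/class/drm/card?")):
    device = os.path.join(card, "device")
    with open(os.path.join(device, "vendor")) as f:
        vendor = f.read().strip()  # 0x8086 = Intel
    driver_link = os.path.join(device, "driver")
    driver = os.path.basename(os.path.realpath(driver_link))
    print(f"{os.path.basename(card)}: vendor {vendor}, driver {driver}")
```

The userspace half (Vulkan/OpenGL) then comes from Mesa, which ships separately from the kernel driver.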
Curious if anyone here has first-hand experience with Arc on Linux, be it with gaming workloads or workstation/general-purpose desktop/rendering workloads, and what it’s been like.
Would especially love to hear about people’s experiences with Battlemage, if anyone here has one.
EDIT: Since the Battlemage B580 has been released and is in people’s hands now, I’ve edited the title and would love to hear the experiences of anyone who has these new GPUs.
I’ve heard good things about Arc Alchemist’s driver support, so I’m not super concerned.
What I’m hoping for is that @wendell would do me the kindness of testing whether Battlemage can do Folding@home, since Alchemist lacks the double-precision instructions required for F@H.
(Note: I’m hoping that Wendell tests F@H compatibility/performance as part of the review, but I understand that this may not be feasible.)
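In case it helps, here’s roughly how I’d check whether a card advertises FP64 at all; a minimal sketch using pyopencl (assuming pyopencl and a working OpenCL runtime are installed):

```python
# Minimal sketch: check whether each OpenCL GPU advertises the
# cl_khr_fp64 extension that double-precision kernels need.
# Assumes pyopencl and an OpenCL runtime are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    try:
        gpus = platform.get_devices(device_type=cl.device_type.GPU)
    except cl.RuntimeError:
        continue  # this platform has no GPU devices
    for device in gpus:
        fp64 = "cl_khr_fp64" in device.extensions
        print(f"{device.name}: FP64 {'yes' if fp64 else 'no'}")
```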
Arc drivers on Linux have been mostly flawless for me, assuming a recent kernel. My use case is different from most, though. I suspect you might have some heartache depending on what it is you’re looking to do.
This is a fun video that is more recent, but it only showcases the bottom-end card.
I expect the initial release won’t be totally stellar, and software support will take a lot more time on the Linux side than it has on Windows.
I thought I’d heard that Alchemist didn’t go very well on Linux, something about firmware updates and buggy drivers. If that’s not true, I’d be very interested in Battlemage for Linux gaming… Hoping we get a nice Linux-oriented review at some point.
Darn, he couldn’t get Blender to work! Interestingly, I don’t see any Linux results for the A310 in Blender’s Open Data benchmark database. Though if one needs a GPU for Blender, Nvidia outclasses everyone else to a degree that isn’t even comparable anyway…
On a slightly related note, someone was able to get the B580 early and upload some Blender benchmarks. Sadly, these are under Windows:
| Device Name | Median Score | # of Benchmarks |
| --- | --- | --- |
| Intel Arc A770 Graphics | 2141.36 | 8 |
| AMD Radeon RX 7700 XT | 2051.91 | 5 |
| Intel Arc B580 Graphics | 1817.36 | 4 |
| Intel Arc A580 Graphics | 1464.85 | 3 |
The B580 is making some OK gains over the A580, but nothing earth-shattering here. I guess Arc was always pretty decent at the kind of raw compute needed in something like Blender. Though I would have hoped that the RT improvements in the architecture would have translated into bigger gains in Blender. Maybe that requires matching improvements in oneAPI or Embree to take proper advantage of them.
I agree those are okay gains, nothing to write home about. But I imagine there are some gains to suss out through software updates and driver improvements. Hopefully this launch goes more smoothly than the first one, because Intel is looking like they could use the win.
I’m running three Acer Predator BiFrost Arc A770s in Proxmox. One is passed through to a Linux server that opens and closes multiple virtual displays for a remote gaming network, and it works great. One runs a local AI and semantic-search server without a problem. The third I move around.
Intel did very badly on software support at first, which is what made these cards a third of the price of Nvidia’s, but they learned their lesson and cleaned that up. Now the cards are cheap and work great.
Other programs I’ve run with them:
- Blender
- Unreal Engine
- GIMP
- DaVinci Resolve
- Houdini
This is an interesting article, but here’s the odd thing: Arc drivers have improved massively over the years since launch, especially for gaming on Windows. Yet that article shows him getting 74 FPS in Counter-Strike 2 @ 1440p. On Windows, people seem to get 2-3x more FPS.
Makes me wonder whether Intel’s gaming GPU optimizations for Windows have been making their way to Linux. Which kinda brings me back to the initial intent of this thread! It would be interesting to see some performance comparisons of the same games across Windows and Linux… but it’s hard to find people who care enough about this to properly test and document it.
Those benchmarks show something entirely unrelated: CPU workload performance between Windows and Ubuntu.
The question for this thread is about the differences in Intel’s GPU driver architecture and game-specific optimizations between Windows and Linux. We know that on the Windows side there have been massive fixes and optimizations for gaming. Many driver releases included major game-specific optimizations, often with double-digit percentage gains, sometimes even low triple digits if something was particularly broken. Phoronix getting ~70 FPS in CS2 on Linux while people on Windows are getting 150-200 or more suggests that the Linux driver might not be getting those game optimizations. But something else could be broken instead, too.
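For anyone who wants to run that comparison themselves, here’s a rough sketch that turns a frametime log (e.g. from MangoHud) into average and 1% low FPS. The column name and header layout are my assumptions; MangoHud logs carry extra metadata lines you’d strip first:

```python
# Rough sketch: summarize a frametime CSV into average FPS and
# "1% low" FPS. Assumes the first row is the column header and
# that frametimes are in milliseconds -- adjust to the real log.
import csv
import statistics
import sys

def summarize(path: str, column: str = "frametime") -> None:
    with open(path, newline="") as f:
        rows = [r for r in csv.DictReader(f) if r.get(column)]
    frametimes_ms = sorted(float(r[column]) for r in rows)
    avg_fps = 1000.0 / statistics.mean(frametimes_ms)
    # "1% low" = average FPS over the slowest 1% of frames
    worst = frametimes_ms[-max(1, len(frametimes_ms) // 100):]
    low_fps = 1000.0 / statistics.mean(worst)
    print(f"{path}: avg {avg_fps:.1f} FPS, 1% low {low_fps:.1f} FPS")

if __name__ == "__main__":
    summarize(sys.argv[1])
```

Log the same scene on both OSes and compare the two summaries; that would settle whether the gap is real.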
Thinking about this, it wouldn’t surprise me if Intel didn’t bother with game-specific GPU driver optimizations for Linux, given the tiny size of the Linux gaming market (and Intel’s tiny dGPU market share on top of that - they are losing money on these today, so they are probably quite picky about where they prioritize investment).
Here’s a related interview where Intel talks a bit about what optimizing drivers is like:
Phoronix just published a like-for-like Windows vs Linux comparison that includes gaming.
Gaming performance seems to be significantly better on Windows, as suspected. E.g.:
Dunno if this is their standard list, and I still dunno how this effort compares to what they invest on the Windows side; they likely have a lot more resources there. I wish they would publish performance improvements on Linux like they do on Windows…
One of the things I’m most interested in is driver maturity for Xorg vs Wayland. Wayland simply is not anywhere near ready for me to use, so I’m stuck with Xorg for what at the moment looks like at least the next several years. However, I’ve heard some rumors of the new Intel drivers focusing only on Wayland support. Is that true, or does the xe driver work fine on Xorg as well?
Wayland support is very mature if you use Mesa drivers.
Basically, with Mesa, Intel only needs to create a Vulkan driver and they get everything else for free, including OpenGL 4.6 and top-notch Wayland support.
Mesa still supports Xorg just fine, and if your driver is in the Mesa ecosystem, it already supports all of Linux. Of course, not always with 100% accuracy and performance, but good enough 99% of the time. Who gives a damn whether your game runs at 250 FPS or merely 220 FPS? Your 144 Hz screen won’t know the difference either way.
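(If you ever want to sanity-check which session type you’re actually running under, a trivial sketch using the standard environment variables:)

```python
# Trivial sketch: report whether the current desktop session is
# Wayland or X11, based on standard environment variables.
import os

session = os.environ.get("XDG_SESSION_TYPE")
if session:
    print(f"XDG_SESSION_TYPE: {session}")
elif os.environ.get("WAYLAND_DISPLAY"):
    print("WAYLAND_DISPLAY is set: likely a Wayland session")
elif os.environ.get("DISPLAY"):
    print("DISPLAY is set: likely an X11 session")
else:
    print("no graphical session detected")
```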