Return to

The VEGA 56 / 64 Cards Thread! General Discussion



Also, those Vegas are twice as fast as a 390X.

I think it would be better. At least if Blender, Radeon GPUs, OpenCL, Bill Gates, or whatever is making the difference can now make out-of-core rendering with Cycles happen, testing a very large scene might show a difference in render times between HBCC on and off.


Well I’m going to try it without HBCC on and see if there’s a difference. Back in 35+ minutes! :joy:


Take it easy Buddy! :slight_smile:


Taking it plenty easy!


@Leo_V @Raziel

Well, essentially no difference. Render time was 36:52.48 without HBCC enabled. Perhaps a larger sample is needed. I should note that Blender ate over 16GB of RAM vs. the ~9.5GB with HBCC; not sure if that was an anomaly or not.


Seems like you didn’t optimize the CPU render benchmark for GPUs.
That’s a really slow render time.

Try increasing the Tile Size from 32 (CPU) to something more suited to GPUs, like 256.
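For reference, the same change can be made from the command line for a background render. This is just a sketch: the .blend filename is a placeholder, it assumes a Cycles-capable Blender on PATH, and `tile_x`/`tile_y` are the 2.7x-era property names.

```shell
# Render frame 1 on the GPU with 256x256 tiles.
# bmw27.blend is a placeholder for whatever benchmark scene you downloaded.
blender -b bmw27.blend -E CYCLES \
  --python-expr "import bpy; s = bpy.context.scene; \
s.cycles.device = 'GPU'; s.render.tile_x = 256; s.render.tile_y = 256" \
  -f 1
```

Argument order matters here: Blender processes arguments left to right, so the `--python-expr` settings have to come before `-f 1` or the frame renders with the scene's saved settings.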


The NOT Vega result


It clearly says this is a CPU result :(. I’m using AMDGPU and I don’t think my Blender GPU tests are working either. I mean, it renders, but the CPU temp goes up, not the GPU’s. However, the bigger block size from the GPU setting seems faster than the small CPU-sized blocks.

I stopped the test when I clearly saw it was the CPU doing all the work. Under Linux I run 3.75 GHz; under Windows I can do 4 GHz on the stock cooler. Next purchase should be an AIO cooler, but my 1700 keeps up with an RX 480 anyway.


This is a GPU compute result, as shown everywhere visible in the viewport.

You may have to set up your GPUs to be selected in the User Preferences. Also make sure you are using a very new Blender build. I’m on 2.79.1, compiled this morning from the git repo.
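If you’d rather script the User Preferences step, something like this should work in the 2.79-era Python API (run it in Blender’s Python console or via `--python`; it won’t run in a plain Python interpreter, and the property names changed again in 2.80):

```python
import bpy

# Cycles device preferences live on the 'cycles' addon in 2.79.
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'  # or 'CUDA' on NVIDIA

# get_devices() populates the device list before we toggle them on.
prefs.get_devices()
for dev in prefs.devices:
    dev.use = True

# Finally, tell the scene to render on the GPU with GPU-sized tiles.
scene = bpy.context.scene
scene.cycles.device = 'GPU'
scene.render.tile_x = 256
scene.render.tile_y = 256
```

This mirrors what the User Preferences > System panel does by hand; the `compute_device_type` value is what decides whether the OpenCL or CUDA devices show up in the list.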

With dual RX 580s I’m now achieving ~17:50 render times for the Cosmic Laundromat scene.

If this were a CPU result it would take ~2 hours.
I think @Steinwerks hasn’t set up his render job correctly for a GPU run, since this particular Blender benchmark was originally set up as a CPU task. But it can very easily serve as a GPU compute test.
Simply set it to GPU Compute and set the Tile Size to 256x256.


Regarding Tile Sizes (From an old render somewhere)



Don’t worry about me, catsay. I’m just a tinkerer, and if I were into Blender I would be googling like a madman to work it out. It was just for shits and giggles for me to test what my 1700 did vs. the RX 480 in Blender.

I don’t actually use Blender other than some work on video encoding and editing. I’m too ugly and introverted for YouTube, and too lazy to be an editor… Man, it’s hard work to do well.


I was messing with the BMW27 benchmark. The CPU tile was 32x32 and the GPU 256x256; both seemed to be running on the CPU anyway. It seemed the 256x256 run on the 1700 was going to be faster, but I stopped when I saw the GPU was still cool, about a minute into a few-minute render.

Interesting table, however. Wouldn’t the GPU RAM affect the table? Hell, maybe even the L3 cache on a CPU?


You’re correct, all I did was set the render type to GPU on the benchmark, as the initial test was to see if HBCC had an effect. Turns out it doesn’t, or at least not the one that was expected (system RAM differences, maybe).


It was definitely a GPU compute render, my 1700X was sitting at less than 5% usage the whole time.


Yes I did, and no I do not. Perhaps in Maya, Cinema4D or others, but not what I use.

I wish I could test. Driving me nuts…



“With or without HBCC, Vega 64 peaked at 8GB used with anti-aliasing disabled. With it enabled, and set to 4xMSAA, we can see that HBCC does have to step in, with GPU-Z reporting close to 11GB of memory used (see below) even though the GPU really has only 8GB.”


I should’ve used the experimental build, as I think I’m being limited by the release version. I changed the tile size to 256x256 and knocked a sadly tiny amount of time off of the Gooseberry render for a still-large 34:59.28.

Will grab experimental and try again tomorrow.


Well, here we go…

Will have to finish this later… stuff…
I have a flat surface chip, so that is pretty ok.


That is not a flat-surface chip that I know of. On the flat, “even” surface chips, the die and HBM2 are set in resin. Yours looks to have no “filler” between the chips.


What’s everyone using for thermal paste? I’m about to place my order for a whole loop. I have a tube of Kryonaut but not sure if I should bother using this or not, and might grab a tube of MX-4 because I believe it’s easier to work with.


Already about a thousand comments in just 2 hours.
I get some of my best memes from the WCCF comment section :slight_smile:


Funny you mention these two; they are what I used. I ended up redoing the CPU block twice, soon to be a third time, and the GPU waterblocks twice. I ran out of Kryonaut, so I used MX-4. The GPUs are using Kryonaut and have great temps. For the last CPU change I used MX-4 and saw no real difference, though I was not able to push my CPU because of stability issues. MX-4 is cheaper and much easier to get. Kryonaut is much easier to apply if you heat it first. Had I had easier access to Kryonaut I would have used it. Kryonaut has the highest W/mK value of any non-conductive thermal paste I know of, at 12.5 W/mK.
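Some rough Fourier’s-law arithmetic shows why the swap made no real difference: the temperature drop across the paste layer itself is tiny either way. The 8.5 W/mK figure for MX-4 is the manufacturer’s spec, and the heat load, contact area, and layer thickness below are made-up but plausible numbers.

```python
def paste_delta_t(power_w, thickness_m, k_w_per_mk, area_m2):
    """Temperature drop across the paste layer via Fourier's law:
    dT = Q * t / (k * A)."""
    return power_w * thickness_m / (k_w_per_mk * area_m2)

# Assumed numbers: 150 W heat load, 50 um paste layer, 9 cm^2 contact area.
POWER, THICKNESS, AREA = 150.0, 50e-6, 9e-4

dt_kryonaut = paste_delta_t(POWER, THICKNESS, 12.5, AREA)  # 12.5 W/mK
dt_mx4 = paste_delta_t(POWER, THICKNESS, 8.5, AREA)        # 8.5 W/mK (spec)

print(f"Kryonaut: {dt_kryonaut:.2f} K, MX-4: {dt_mx4:.2f} K, "
      f"difference: {dt_mx4 - dt_kryonaut:.2f} K")
```

With these assumptions the gap works out to roughly a third of a degree, which is well inside run-to-run noise; application quality and mounting pressure matter far more than the paste’s rated conductivity.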

I would use Kryonaut if possible.