The VEGA 56 / 64 Cards Thread! General Discussion

I'm having an issue with this; it was reason #2 behind the assumption I was making:

Reason #1 was that I tested it myself, I can't remember how many months ago: the Gooseberry benchmark on my 390X, and I was hitting "out of VRAM".

Now, this is crazy: I did the test again and I can render it on my 390X. GPU-Z shows full usage of my video card memory plus over 1 GB of dynamic VRAM (!?).

Amazed, I tried going back to Blender 2.77 and even Crimson (the version previous to ReLive, from Nov. '16) and it still does the render. (??)

Something is weird here; maybe it's good news for OpenCL support, but… I'm a bit confused now.

For the moment I will edit my post to make it clear that this is maybe not an appropriate test for evaluating HBCC functionality in compute-based rendering.

Well, that's a little disconcerting. I haven't tried it without HBCC yet. Maybe we need a bigger render project to try?

Also, those Vegas are twice as fast as a 390X.

I think that would be better. If Blender, Radeon GPUs, OpenCL, Bill Gates, or whatever is making the difference can now do out-of-core rendering with Cycles, then testing a very large scene might show a difference in render times between HBCC on and off.

Well I’m going to try it without HBCC on and see if there’s a difference. Back in 35+ minutes! :joy:

Take it easy Buddy! :slight_smile:

Taking it plenty easy!

Edit:

@Leo_V @Raziel

Well, essentially no difference. Render time was 36:52.48 without HBCC enabled. Perhaps a larger sample is needed. I should note that Blender ate over 16 GB of RAM vs the ~9.5 GB with HBCC; not sure if that was an anomaly or not.
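If anyone wants to make the HBCC on/off comparison more repeatable, a quick wrapper like this can time a headless render (rough sketch only: the .blend path and frame number are placeholders, and it assumes the blender binary is on the PATH):

```python
# Rough timing harness for A/B runs (e.g. HBCC on vs. off).
# Assumes "blender" is on the PATH; the .blend path and frame are placeholders.
import subprocess
import time

BLEND_FILE = "gooseberry_benchmark.blend"  # placeholder path

start = time.time()
subprocess.run(["blender", "-b", BLEND_FILE, "-f", "1"], check=True)
print("Render took {:.1f} s".format(time.time() - start))
```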


Seems like you didn't optimize the CPU render benchmark for GPUs.
That's a really slow render time.

Try increasing the Tile Size from 32 (the CPU default) to something better suited to GPUs, like 256.
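For reference, you can also do that from Blender's Python console instead of digging through the render panel (a minimal sketch using the 2.79-era property names; they may differ in other versions):

```python
# Bump the Cycles tile size to something GPU-friendly (Blender 2.79-era API).
import bpy

scene = bpy.context.scene
scene.render.tile_x = 256
scene.render.tile_y = 256
```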


The NOT Vega result

It clearly says this is a CPU result :(. I'm using AMDGPU and I don't think my Blender GPU tests are working either. I mean, it renders, but the CPU temp goes up, not the GPU's. However, the bigger block size from the GPU setting seems faster than the small CPU-sized blocks.

I stopped the test when I clearly saw it was the CPU doing all the work. Under Linux I run at 3.75 GHz; under Windows I can do 4 GHz on the stock cooler. The next purchase should be an AIO cooler, but my 1700 keeps up with an RX 480 anyway.
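One quick way to check whether the card is even visible to OpenCL under Linux before blaming Blender (a sketch that assumes the pyopencl package is installed; clinfo gives the same information). If the RX 480 doesn't show up here, Cycles will fall back to the CPU:

```python
# List every OpenCL platform and device the driver exposes.
# Assumes pyopencl is installed (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)
```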

Where?
This is a GPU compute result; it's set that way everywhere visible in the viewport.

You may have to set up your GPUs to be selected in the User Preferences. Also make sure you are using a very recent Blender build. I'm on 2.79.1, compiled this morning from the git repo.
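If it helps, that User Preferences step can also be scripted. Something like this should work on 2.79 (untested sketch, property names from memory; 2.8 renamed user_preferences to preferences):

```python
# Enable every OpenCL GPU for Cycles in User Preferences (Blender 2.79-era API).
import bpy

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'
prefs.get_devices()  # refresh the device list
for device in prefs.devices:
    device.use = (device.type == 'OPENCL')
    print(device.name, "enabled" if device.use else "disabled")
```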

With dual RX 580s I'm now achieving ~17:50 render times for the Cosmic Laundromat scene.

If this were a CPU result it would take ~2 hours.
I think @Steinwerks hasn't set up his render job correctly for a GPU run, since this particular Blender benchmark was originally set up as a CPU task. But it can very easily serve as a GPU compute test.
Simply set it to GPU Compute and set the Tile Size to 256x256 (see the snippet below).
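Scene-side it's just one more property on top of the 256x256 tiles from the snippet further up; from the Python console something like this should do it (same caveat, 2.79-era names from memory):

```python
# Tell Cycles to use the compute devices enabled in User Preferences.
import bpy

bpy.context.scene.cycles.device = 'GPU'
```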

Regarding Tile Sizes (From an old render somewhere)



Don't worry about me, catsay. I'm just a tinkerer, and if I were into Blender I would be googling like a madman to work it out. It was just for shits and giggles, to see how my 1700 did vs the RX 480 in Blender.

I don't actually use Blender for much other than some work on video encoding and editing, and trying things out. I'm too ugly and introverted for YouTube, and too lazy to be an editor… man, it's hard work to do well.

I was messing with the BMW27 benchmark. The CPU tile was 32x32 and the GPU 256x256, but both seem to run on the CPU anyway. It looked like 256x256 on the 1700 was going to be faster, but I stopped when I saw the GPU was still cool, about a minute into a render that takes a few minutes.

Interesting table, though. Wouldn't the GPU RAM affect the table? Hell, maybe even the L3 cache on a CPU?

You're correct; all I did was set the render type to GPU on the benchmark, as the initial test was to see if HBCC had an effect. Turns out it doesn't, or at least not one that was expected (system RAM differences, maybe).

@Marten

It was definitely a GPU compute render, my 1700X was sitting at less than 5% usage the whole time.

Yes I did, and no I do not. Perhaps in Maya, Cinema4D or others, but not what I use.

I wish I could test. Driving me nuts…

@Steinwerks

“With or without HBCC, Vega 64 peaked at 8GB used with anti-aliasing disabled. With it enabled, and set to 4xMSAA, we can see that HBCC does have to step in, with GPU-Z reporting close to 11GB of memory used (see below) even though the GPU really has only 8GB.”
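For the Linux people in the thread who don't have GPU-Z, the amdgpu driver seems to expose VRAM counters through sysfs, so something like this can poll the same number (the paths are an assumption on my part, and the card index may differ on your system):

```python
# Read VRAM usage from the amdgpu sysfs interface (assumed paths; card index may vary).
SYSFS_VRAM_USED = "/sys/class/drm/card0/device/mem_info_vram_used"
SYSFS_VRAM_TOTAL = "/sys/class/drm/card0/device/mem_info_vram_total"

with open(SYSFS_VRAM_USED) as f:
    used = int(f.read())
with open(SYSFS_VRAM_TOTAL) as f:
    total = int(f.read())

print("VRAM: {:.2f} / {:.2f} GiB".format(used / 2**30, total / 2**30))
```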


I should’ve used the experimental build, as I think I’m being limited by the release version. I changed the tile size to 256x256 and knocked a sadly tiny amount of time off of the Gooseberry render for a still-large 34:59.28.

Will grab experimental and try again tomorrow.

Well, here we go…


Will have to finish this later… stuff…
I have a flat surface chip, so that is pretty ok.


That's not a flat-surface chip that I know of. On the flat, "even" surface chips, the die and HBM2 are encased in resin. Yours looks to have no "filler" between the chips.

What’s everyone using for thermal paste? I’m about to place my order for a whole loop. I have a tube of Kryonaut but not sure if I should bother using this or not, and might grab a tube of MX-4 because I believe it’s easier to work with.