AMD Vega Frontier Edition announced

Haha, maybe not in the high tier, but keep in mind not everyone wants to pay 800 dollars for a shill80 Ti either. Personally, I play a lot of Source games, League, and emulators, and I stream a lot. Do I need an Nvidia GPU? Fuck no. Not enough VRAM for my price point (3 GB vs. 8 GB), and the drivers will be nuked from orbit in a year when the next series comes out.

Now, why would AMD kill the market their GPUs are basically made for? Good mid-tier cards that anyone can use for anything. Nvidia makes the most expensive thing they possibly can, with as much power as they can. Remember when the 1080 released and it was double the Titan X? What is AMD supposed to do about an arch that has been refined and refined since the 800 series got skipped over? At that, why make ridiculous shit that no one can pay for when you can produce for the mass market?

They aren't out of gaming at all. No one needs an 8 GB VRAM card for doing email and YouTube. The Fury X is still around, and the AMD cards are just as good as the mid-spec Nvidia cards while being CHEAPER and offering more. That's the point.

If they were done with gaming, all we would have are tuner cards, sound cards, shitty SSDs, and APUs.

And Zen.


The pro.radeon website linked at the start of the topic has one benchmark comparison vs. a Titan Xp.

Whatever it is, as long as there is a market for computer modellers like @weskie, then we are safe from assuming that "GPU", "video card", "graphics card", "video processing unit", etc., equals the general consumer (gamer) market.


This Ars Technica article says that there are two V100 variants coming to desktop. One is low-profile and expected to be trimmed down. The other is a 250-watt version of the GV100, which is a 300-watt accelerator.

AMD is going to have some hurdles to get past in the deep learning space before these cards can really be impactful. Primarily, from what I've heard, the fact that a lot of deep learning software is written against the CUDA libraries.


Yes. And if they had had numbers from the Quadro P5000, it would have lost badly. The latter scores 150+ in Catia. The 135 of the Frontier Edition doesn't look very impressive then, does it? Even a Quadro P4000 is at ~125. It looks even worse in SolidWorks, where the Vega FE gets left in the dust by a P2000.

https://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Pascal-Quadro-Roundup-P2000-P3000-P5000-Tested/General-Compute-Perform

No need for Volta; Pascal is already way more powerful.

Also, that pixel fill rate of the Frontier Edition looks low. The ordinary one-year-old 1080 beats it easily. Vega FE has 90 Gpixels/s? That's a whopping 5 more than a GTX 980. Not the Ti, the plain old 980. The 1080 takes it up to 111. Let's not even mention the Ti.
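Those numbers are easy to sanity-check, since peak pixel fill rate is just ROPs × clock. A minimal sketch, assuming the 1080's commonly listed 64 ROPs and ~1733 MHz boost clock (my figures, not from this thread):

```python
# Peak pixel fill rate = ROPs * clock (pixels per second).
def fill_rate_gpixels(rops, clock_mhz):
    return rops * clock_mhz * 1e6 / 1e9

# GTX 1080: 64 ROPs at ~1733 MHz boost -> ~111 Gpixels/s,
# which matches the number quoted above.
print(fill_rate_gpixels(64, 1733))  # ~110.9
```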

I am still wondering where this Vega FE sits in a full Vega product line-up. Looking at the currently available marketing material, it seems to me this Vega FE occupies more or less the same space in the spectrum as the Titan Xp.
I get the feeling AMD wants to make "professional" use (like data science, product design, etc.) more accessible to home users, professional freelancers, and very small businesses.

Some quotes from AMD:

"Developers can now tap into the power of Vega to do machine learning algorithm development before deploying it to massive servers equipped with Radeon Instinct accelerators."

"The Radeon™ Vega Frontier Edition, combined with our ROCm open software platform, paves the way for pioneers to continue pushing boundaries in fields like AI."

I am still expecting an AMD SKU aimed at larger businesses and enterprises to compete with the Quadro GPUs.

When pricing is announced for the Vega FE, we will know for sure.

There will be an AMA on Reddit tomorrow if you want to ask Raja Koduri questions directly:

"I'll also be hosting an AMA on Reddit this Thursday at 2 pm PST – please join me at reddit.com/r/AMD."

Looks like it needs to be cheap to compete, as the Quadro cards are way better. The Nvidia software stack is way better too; that's a big part of why they are so popular.

The Vega FE is just a Fiji with higher clocks, according to the numbers. The pixel fill rate is even worse than a Fiji would be at those kinds of clocks. Yes, worse. Is it based off the base clock? Or are there fewer ROPs? Memory bandwidth is worse too. Yay.
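If Vega FE keeps Fiji's 64 ROPs (an assumption on my part), the quoted 90 Gpixels/s lines up with a clock well under the ~1600 MHz boost, which would point at the base clock rather than fewer ROPs. A rough check:

```python
# Clock implied by the quoted 90 Gpixels/s, assuming Fiji-style 64 ROPs.
quoted_gpixels = 90.0
rops = 64
implied_clock_mhz = quoted_gpixels * 1e9 / rops / 1e6
print(implied_clock_mhz)  # ~1406 MHz: near a plausible base clock,
                          # well below a ~1600 MHz boost clock
```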

Why is Vega unimpressive? That seems to be the question to ask.


Doesn't HBM2 change the game entirely regarding the utilization of bandwidth? I am by no means an expert, and I get the skepticism, but I remain hopeful 🙂

From the Frontier Edition website:

The state of the art memory system on Radeon Vega Frontier Edition removes the capacity limitations of traditional GPU memory. Thanks to automatic, fine-grained memory movement controlled by the high bandwidth cache controller, Vega enables creators and designers to work with much larger, more detailed models and assets in real time.
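If I read the marketing blurb right, the on-board HBM2 acts like a cache over a much larger address space, with the high bandwidth cache controller moving pages in and out automatically. A toy sketch of that general caching idea, purely illustrative and in no way AMD's actual mechanism:

```python
# Toy illustration of the "local memory as cache" idea: a small resident
# set in fast memory, with pages fetched from a larger backing store on
# demand (a conceptual sketch, not AMD's actual HBCC logic).
from collections import OrderedDict

class HighBandwidthCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.resident = OrderedDict()  # page id -> data, in LRU order

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)    # hit in local memory
            return "hit"
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)  # evict least recently used
        self.resident[page] = f"page-{page}"   # fetch from backing store
        return "miss"

cache = HighBandwidthCache(capacity_pages=4)
print([cache.access(p) for p in [0, 1, 2, 0, 5, 6, 0]])
# ['miss', 'miss', 'miss', 'hit', 'miss', 'miss', 'hit']
```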


Compared to HBM1? Maybe; I would have to look into that further.


In the initial testing (early clocks and drivers) we did against the fastest competitive card we could get our hands on, we found Radeon Vega Frontier Edition to be more than 30% faster (1)

From the endnote:

(1) Testing conducted by AMD Performance Labs as of May 15th 2017 with the

  • Radeon™ Vega Frontier Edition graphics card, Intel® Xeon E5 2640v4 2.4Ghz 10C/20T, Dual Socket, 32GB per socket, 64GB Total, Ubuntu 16.04 LTS, ROCm 1.5, and OpenCL 1.2.
  • The Nvidia Tesla P100, was tested on a system comprising of Intel® Xeon E5 2640v4 2.4Ghz 10C/20T, Dual Socket, 32GB per socket, 64GB Total, Ubuntu 16.04 LTS with CuDNN 5.1, Driver 375.39 and Cuda version 8.0.61.
  • When using the DeepBench Benchmark, Radeon™ Vega Frontier Edition completed in 88.7 ms and the Nvidia Tesla P100 completed in 133.1 ms.
  • PC manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers. VG-9
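For what it's worth, the two quoted times say slightly different things depending on how you read them: Vega finishes in about 33% less time, which works out to about 50% higher throughput. A quick check:

```python
# What the quoted DeepBench times imply.
vega_ms, p100_ms = 88.7, 133.1
print(p100_ms / vega_ms)      # ~1.50: ~50% higher throughput
print(1 - vega_ms / p100_ms)  # ~0.33: finishes in ~33% less time
```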

You can't compare gaming benchmark performance to raw compute performance. In compute performance the Fury X chews up the GTX 980 Ti, but gaming performance has way more variables, and ultimately Nvidia manages to do just as well with a lower TFLOP count.

This just shows who's better on the software side of things. Of course, other hardware in the GPU may be factored in as well.
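The raw-compute gap mentioned above follows from the usual peak-FLOPS formula. A minimal sketch, assuming the commonly listed shader counts and clocks for the Fury X (4096 @ ~1050 MHz) and the 980 Ti (2816 @ ~1000 MHz), neither of which is from this thread:

```python
# Peak FP32 throughput = shaders * 2 FLOPs/clock (FMA) * clock.
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(peak_tflops(4096, 1050))  # Fury X: ~8.6 TFLOPS
print(peak_tflops(2816, 1000))  # 980 Ti: ~5.6 TFLOPS
```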

I agree, Nvidia is way ahead in the deep learning game. In a perfect world with money not being a factor, I would be building a complex to install a couple of truckloads of these $150,000 servers when they become available at the end of the year or beginning of 2018. Or, being realistic, somebody might plan and prep to get a PCIe Volta when it becomes available "someday". But a Vega PCIe card is said to be offered next month, and if it gets priced right, many will be catching that train.

Well, OK, maybe, but...

You know Nvidia's pro HBM2 product, built with tech they licensed from AMD, costs $6k, right? AMD would be crazy to leave that kind of money on the table.


The Vega Frontier Edition is apparently now available in stores.

  • $999 for the air-cooled version (300 W TDP)
  • $1,499 for the water-cooled version (375 W TDP)

From the PCWorld article:

In the given time we had to run tests, we saw the Frontier Edition outscore the Titan Xp by 28 percent in Catia and Creo to 50 percent in SolidWorks. We also ran Maxon’s Cinebench, a popular OpenGL benchmark, in which the Frontier Edition was about 14 percent faster. The numbers echo what we already knew about the Frontier Edition, but this time we could see the performance demonstrations live.

Official AMD announcement

"hands on" by PCworld (autoplay warning)

Does anyone know the clock speeds for the air- and water-cooled versions? Both cards list 13.1 TFLOPS of performance, despite the water-cooled version having a 75-watt-higher TDP as well as a $500 price premium over the air-cooled version.

Higher sustained boost clocks, less noise, and so on.
Other than that we don't know much; the PCBs are probably identical.

I have been searching for a while now, as I wondered the same, but this info is suspiciously hard to find at the moment.


I couldn't tell you the difference between the two versions, but since we know how many cores it has and the TFLOPS, we can calculate the clock speed.

To get 13.1 TFLOPS from 4096 cores you need about 1600 MHz.
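Working that backwards explicitly (assuming the usual 2 FLOPs per core per clock from FMA):

```python
# Clock implied by 13.1 TFLOPS on 4096 shaders, assuming 2 FLOPs/clock (FMA).
tflops, shaders = 13.1, 4096
clock_mhz = tflops * 1e12 / (shaders * 2) / 1e6
print(clock_mhz)  # ~1599 MHz
```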

I would assume the WC version clocks higher, or maybe sustains boost better.

The price delta is still odd, though.

Still odd that they won't list the clock speeds anywhere. I don't remember any AMD or Nvidia launch where we didn't know all of the specs prior to being able to buy the thing.
