Hacker News
The AMD Radeon RX Vega 64 and RX Vega 56 Review: Vega Burning Bright (anandtech.com)
95 points by robin_reala on Aug 14, 2017 | hide | past | favorite | 68 comments


I ordered a $499 RX Vega 64 from Amazon at 9AM this morning, mainly to show support for AMD's open source Linux GPU efforts [1] and because there should be pretty good compute potential, but I have to admit to being fairly disappointed by the power consumption and general performance (only a 30% improvement vs the 2-year-old Fiji, despite a shift from 28nm to 14nm and much-touted architectural improvements [2], vs Nvidia's 70% uplift launched over a year ago).

Interesting notes from the reviews - HBCC is not enabled by default but doesn't give much of a performance boost anyway [3].

According to Computerbase [4] (DE) primitive shaders are still inactive: https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-tes... (RTG engineer discussing that it should be automatically used in the future [5])

One interesting note is that while it looks like initial supplies have all sold out, early testing shows that OOTB Vega is pretty awful for mining right now. [6] Vega 64 is clocking at about 30MH/s at about 300W - a flash-BIOS'd RX 470 can get 28-30MH/s at ~150W.

[1] https://www.phoronix.com/scan.php?page=article&item=rx-vega-...

[2] http://www.pcgamer.com/the-amd-radeon-rx-vega-56-and-vega-64...

[3] http://www.guru3d.com/articles_pages/amd_radeon_rx_vega_56_8...

[4] https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-tes...

[5] https://mobile.twitter.com/ryszu/status/896304786307469313

[6] https://nl.hardware.info/reviews/7517/23/amd-radeon-rx-vega-...
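For a rough sense of how those mining numbers compare, the metric that matters is hashes per watt. A quick sketch using the approximate figures quoted above (treat them as ballpark, not measured):

```python
# Rough efficiency comparison using the approximate figures quoted above
# (MH/s per watt is the number miners actually optimize for).
cards = {
    "Vega 64 (stock)": (30, 300),         # (MH/s, watts)
    "RX 470 (flashed BIOS)": (29, 150),
}

efficiency = {name: mh / w for name, (mh, w) in cards.items()}
for name, eff in efficiency.items():
    print(f"{name}: {eff:.3f} MH/s per watt")
```

By this measure the flashed RX 470 is roughly twice as efficient as a stock Vega 64, which is why OOTB Vega looks so unattractive for mining.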


> According to Computerbase [4] (DE) primitive shaders are still inactive

Yeah, though I think there is some opportunity for primitive shaders to be transparently extracted from vertex shaders. You could do this by following the operation dependencies back from gl_Position (or equivalent). This might not work in conjunction with geometry shaders though, or perhaps it could with some safety margin.

At least from what I gather about how they're supposed to work, this should be perfectly adequate most of the time, given that the discard semantics are very conservative (backface, off screen, not beneath a sample point, etc.).
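The backward-dependency idea can be illustrated on a toy IR. Everything here (the op format, the names) is made up purely for illustration; real drivers would do this on their internal SSA form, but the shape of the algorithm is just a backward slice from the position output:

```python
def position_slice(ops, position_out="gl_Position"):
    """Walk dependencies backwards from the position output and keep
    only the ops needed to compute it (a classic backward slice)."""
    # ops: dict mapping each result name -> list of its input names
    needed, stack = set(), [position_out]
    while stack:
        name = stack.pop()
        if name in needed or name not in ops:
            continue  # already visited, or a shader input / constant
        needed.add(name)
        stack.extend(ops[name])
    return needed

# Toy vertex shader: position depends on mvp * vertex; color does not,
# so a derived "primitive shader" could skip it for culled triangles.
toy_shader = {
    "gl_Position": ["mvp_vertex"],
    "mvp_vertex": ["u_mvp", "in_pos"],
    "v_color": ["in_color", "u_tint"],
}
print(position_slice(toy_shader))  # v_color is not in the slice
```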


And a non-flashed 1070 can get around 30 at 120W(ish)...

I guess the miners are hoping that a driver update will unleash the rumoured 70+MH/s. If it doesn't materialise I can imagine these being sold on fairly quickly, with that power consumption.


I wouldn't be surprised if the performance is artificially hindered until later driver updates, to prevent them from being gobbled up by miners.


Why is it an issue if miners gobble it up? A sale is a sale, no?


Miners tend to have very high rates of warranty returns, due to running the cards 24x7x365. In some markets manufacturers are trying to sell mining-specific cards which don't have display outputs, etc., and have short warranties. Many markets (US, EU) don't let you invalidate warranties just for using a product 24x7, so that approach doesn't work everywhere.


They could base GPU warranties on usage, the same way car part warranties are based on mileage. In fact Samsung is already doing something similar for SSDs: their warranty has a data write limit of 400 TB.


Miners are not the company's core base and have no particular loyalty, and will quickly fizzle out.

Basically, sure, miners get them quick cash, but if miners get their graphics cards and people wanting to play games or do other things don't, it'll make those potential customers unhappy and they may go to a competitor.


It's probably not an issue for AMD, as they sell either way, but it does push up prices for the gamer.


I don't think the rumors were ever true in the first place.


They do look pretty baseless and speculative, and certainly the "Frontier Edition" wasn't able to reach any spectacular rates.


Be aware that the open-source driver currently only supports headless mode on Vega. You have to use the proprietary driver to actually use the card for video output.


Only the headless support is mainlined. The open source code is already working in the amd-staging branch: http://www.phoronix.com/scan.php?page=article&item=radeon-ve...


Why are the AMD cards preferred by miners? Is it the price/performance ratio only? Can the 2x fp16 operations be exploited for mining?


Price/performance, yes. Specifically cost of entry/performance.

For perspective, a radeon 470/570 earlier this year would pay for itself in under 3 months even in countries with fairly high electricity costs.
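The payback arithmetic behind that claim is simple. All the figures below are placeholders, not a forecast - you'd substitute the current card price, daily mining revenue, and your local electricity rate:

```python
def payback_days(card_cost, daily_revenue, watts, kwh_price):
    """Days until mining revenue covers the card, net of electricity."""
    daily_power_cost = watts / 1000 * 24 * kwh_price
    daily_profit = daily_revenue - daily_power_cost
    if daily_profit <= 0:
        return float("inf")  # never pays off
    return card_cost / daily_profit

# Placeholder figures: a $180 RX 470 earning $3/day gross,
# drawing 150 W at $0.30/kWh (a fairly high European rate).
print(round(payback_days(180, 3.00, 150, 0.30)), "days")
```

With those made-up but plausible 2017 numbers the card pays for itself in about three months, even at high electricity prices, which matches the claim above.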


Can you share the Amazon link? I can't seem to find it on Amazon.


I think it's not visible since it's out of stock, and probably backordered into oblivion if that's even possible on Amazon. It's also out of stock from every brand on Newegg, bundle or standalone. Newegg.ca has some bundles still, but they include a high end motherboard and a CPU; I might nonetheless bite. I'll more likely wait until Vega 11, when they get the opportunity to tune this sucker and make the die a bit smaller.

If you haven't bought one already, it'll probably be a little while until you can buy one.


I did buy one on Newegg, but the price surprised me. I paid $599 for it (reference PowerColor model).


RX Toaster, they need to pull some magic out of the chip with upcoming drivers, otherwise this won't be the big return everyone expected, I feel.

I just hope the MI25 cards will be far cheaper than the $6k Tesla P100s, and that the ROCm stack is good enough that, for compute, the architecture may still be a viable and very competitive option.


On the Open Source Linux OpenGL drivers (but not the proprietary ones), it is competitive with the 1080Ti at considerably lower MSRP and real retail price (once stocks recover from the most hyped GPU launch since... the Geforce 9800GTX? I don't even know). Power to performance ratios are also not well shown with the current shipping proprietary drivers; frankly, I don't understand why AMD can't get that team to deliver stellar day-one drivers for every launch, but it is what it is, I guess.


> On the Open Source Linux OpenGL drivers (but not the proprietary ones), it is competitive with the 1080Ti at considerably lower MSRP

Yaaay! Really, that's cool but it's a negligible fraction of the market. Unless Linux gaming becomes a market force overnight, it won't matter a lot, I think.

> Power to performance ratios are also not well shown with the current shipping proprietary driver

I doubt they can close the large power gap that the initial benchmarks show (compared to the 1080 and esp. 1070), but I'm hoping the power margin will get narrower and a bit of performance increase might compensate too.

Indeed, I am dampening my expectations, I'm most hopeful that in the high-end GPGPU compute arena they might not be too late and will provide enough Perf/W/buck to gain traction.


"Surprisingly, native FP16 operations are not currently exposed to OpenCL"

Given that these cards have potential to crash the price/perf of current deep learning solutions, this is more than regrettable.


This is the sort of decision which scares away deep learning people: when the only hardware advantage is not supported at launch, it tells us that AMD really is not serious about good support and that the stack cannot be trusted.


I regrettably agree there. I spent time doing OpenCL implementations of some of our stuff, but most recent decisions from AMD's side seem to indicate they'd rather get rid of it as well, in favor of some CUDA compatibility layers.

This isn't very encouraging to continue supporting AMD cards. Might as well drop them entirely until their CUDA compatibility is more mature.


Is OpenCL really competitive with CUDA now? I thought it was still basically NVIDIA or bust for the mainstream libraries, because they all depend on cuDNN.


There are two questions there that are really completely distinct, even though you're conflating them:

1) OpenCL vs CUDA

In reality, their performance is identical: writing the same kernel in OpenCL and in CUDA (which is almost always possible unless hardware intrinsics are used) gives the same performance with NVIDIA's current drivers. The odds are pretty good that the same compiler is backing them both.

2) cuDNN vs MIOpen

I haven't seen many benchmarks here. AMD has patches for most of the big frameworks, but they're not upstream yet. The gist of the implementation seems state of the art, but I have no idea if the level of optimization is comparable. hipCaffe exists, but I haven't seen any usable benchmarks so far.

So for deep learning, the question isn't CUDA vs OpenCL. It's cuDNN vs the alternatives (MIOpen, Intel MKL/DAAL).


My company is also working on broader hardware support, we haven't tried any of the Vega cards yet but we do have early results for OpenCL on hardware in our lab:

http://vertex.ai/blog/bringing-deep-learning-to-opencl


Thanks for the detail and for clearing the waters I muddied a bit. I guess I'm just surprised that people are doing deep learning on AMD GPUs - not because of the performance of OpenCL, but because the reliance on cuDNN in frameworks like TensorFlow and Torch made that seem not worth it in the past.


Well, cuDNN doesn't run on AMD cards, so if you want your software to have good performance on all consumers' systems (as well as avoiding being vendor-locked), cuDNN isn't an option. Also, you have to get explicit permission from NVIDIA to redistribute cuDNN (though that reportedly is a formality).

Not being vendor locked in is pretty nice if the competition releases a 22 TFLOPS (fp16) card for only 400 USD. Well, they could have released it, if they had working fp16 support in their driver, sigh.
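That headline fp16 figure falls out of simple arithmetic: FLOPS = shader count x 2 (an FMA counts as two ops) x clock, and Vega's packed fp16 doubles the rate again. Using Vega 56's approximate published specs (shader count and boost clock below are my assumptions, not from this thread):

```python
# FLOPS = shaders * 2 (FMA = 2 ops) * clock; packed fp16 doubles it.
shaders = 3584          # Vega 56 stream processors
boost_ghz = 1.471       # approximate boost clock, in GHz
fp32_tflops = shaders * 2 * boost_ghz / 1000
fp16_tflops = 2 * fp32_tflops
print(f"fp32: {fp32_tflops:.1f} TFLOPS, fp16: {fp16_tflops:.1f} TFLOPS")
```

That lands at roughly 10.5 TFLOPS fp32 and 21 TFLOPS fp16 - in the ballpark of the 22 TFLOPS being quoted - but only if the driver actually exposes the packed fp16 path.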


That's for inference though; the intensive part is training, no? Then with a reasonable model (maybe distilled somehow from a larger one through more training) you could even do it off CPU or whatever.


fp16 is usable for training and inference, so the Vega would have been good for both.

I care about inference performance too, anyway. CPU is only a (much slower) fallback if the OpenCL drivers are broken.


Impossible for me to get excited about this. What are these really going to sell for after the introductory stocks run out - $1000?

Can't wait for this crypto madness to end.


The "launch price" of $499 (or £450) for the basic model was only for a very limited number of cards with a few vendors AFAICT. The prices immediately jumped £100 when they had sold (under an hour).

So the "MSRP" of $499 is effectively a lie at this point. What they will sell for when the crazy market forces we are experiencing right now kick in ... who knows. I suspect that depends on the hash rate people can force out of them.

-- edit -- am actually tempted to leave the one I've ordered boxed for now, just to see what happens to the market over the next week or so.


I feel the other way, I want power users to buy these in bulk and dump them for cheap on ebay once they are no longer powerful enough for mining.

Of course, many factors affect the final price on eBay and it's also about whether you need one today.


Right now it looks like the RX 480s and 580s get dumped for MSRP. After being abused like that.


It runs hot and sucks a lot of power. Are miners going to be interested in that?


Somebody is, the UK vendors I've watched are basically sold out now...


Where are Quantum [1] and the EHP [2]? Since we have Epyc with 4 die, and Threadripper with 2 die and 2 placeholders, it's time for AMD to make a multi-chip module with Ryzen, Vega, and 2 stacks of HBM2 memory. That could be packaged into an awesome SFF PC much like their old Quantum concept.

[1] http://wccftech.com/amd-project-quantum-not-dead-zen-cpu-veg...

[2] http://wccftech.com/amd-exascale-heterogeneous-processor-ehp...

And please don't illuminate it with plain LED light, use laser diodes so we get that ethereal look of the interference pattern ;-)


Does anyone else also think the bundles are mostly a bad deal for the customers, and maybe only a good one for AMD?

First of all the monitor. Chances that I need one are fairly small. And if yes, why would I want exactly that one? I personally think the resolution (and especially pixel density) is below what I would want now, especially with so much GPU power available. A 4k screen with Freesync would be more attractive.

Then the CPU/board. Chances are a little higher that one could use that, but I guess even fewer than 40% of buyers would require a CPU and board. If we restrict it to Ryzen 1700 options the number would be even lower, since a 1600X might be the more interesting option for many gamers. And of course some may also prefer Intel. If we treat the CPU/board part as the main benefit of the bundle it's also still mostly a zero-sum game: you pay $100 extra to get a $100 discount. I could most likely get a better overall deal by buying the GPU and CPU separately, from whichever shops have the best price on each.

When we finally factor the games in it might be a small benefit for a small number of buyers. However their value is really small, since Prey is already some days old and one can get it discounted.

If the demand for the bundles is as low as I think and AMD really wants to sell most of the cards in bundled form (predicted by anandtech), then the overall prices will mostly trend towards the bundle prices for all cards due to availability.


The reason for the bundles is that they are trying to prevent purchases by miners. I do agree that they aren't a perfect solution


"Relative to the GeForce GTX 1080, we’ve seen power measurements at the wall anywhere between 110W and 150W higher than the GeForce GTX 1080, all for the same performance."

The Vega 64 has similar performance to the GTX 1080 for a similar price (if not for the fact that the new Radeons will never be in stock due to mining), but with much higher power consumption.


Why are they so power hungry? Vega looks very promising for Linux gaming, but that power consumption is just way overboard.


It is a trend that AMD overvolts their cards to push clocks high enough to compete with Nvidia. Undervolted, every GCN generation comes in line with Nvidia in performance per watt; the problem is that actual performance is then too low to be impressive. In the end the numbers win: if you are at a slight disadvantage, cranking up the power draw to reach performance parity almost surely sells more units than being just as efficient but performing worse per die area (because wafers are a near-fixed cost). It doesn't help that AMD generally makes one series of dies and sells them to both the consumer and enterprise segments, whereas Nvidia will disable features at the hardware level when dies go into gaming cards, to eke more efficiency out of them.
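The physics behind the undervolting observation: dynamic (switching) power scales roughly with V² x f, so a modest voltage cut saves power disproportionately. A back-of-envelope illustration (the 10% figure is just an example, not a measured Vega undervolt):

```python
def relative_dynamic_power(v_ratio, f_ratio):
    """Dynamic CMOS power scales ~ C * V^2 * f.
    Return power relative to stock for given voltage/clock ratios."""
    return v_ratio ** 2 * f_ratio

# A hypothetical 10% undervolt at the same clock cuts dynamic power
# by roughly 19%, which is why AMD cards undervolt so well.
print(relative_dynamic_power(0.90, 1.00))
```

The flip side is the same math in reverse: squeezing out the last few percent of clock requires a voltage bump, so power rises much faster than performance near the top of the curve.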


This is unfortunately very disappointing. It practically can't even compete with a 1080 at the same price, and that card has been out for a year now.

Really goes to show how far ahead Nvidia is.


Only on DirectX 11 games, which I could care less about. When I buy a video card, I expect it to last 2-3 years, which means that the most important benchmarks are the DX12 and Vulkan ones. Vega has a significant lead on those.


If you could care less, then you do care at least somewhat. There are still PLENTY of DX11 games to play, considering a lot of us have backlogs the size of Texas. DX12 titles are very limited right now.


I also have a nice backlog of DX11 games. They work great on my 970 let alone a Vega.


Vulkan titles will overtake them.


Why do you think game studios will support Vulkan when they can just target DX12 and get PC and Xbox support?


Because Vulkan gets them Android and Switch support. They aren't insignificant enough markets to just ignore.

I think Sony said at some point they wanted to support Vulkan on the PS4 as well, but that never materialized. You can use Vulkan on the PS4 GPU if you homebrew Linux on it, but that's not quite what developers are interested in.


They already do. There are more Vulkan-supported games out than DX12 ones, as far as I know. The why has many variables, but the simple answer is that Windows 7 is still a common target (and it has no DX12 support). More deeply, forward-looking studios don't want to fall into hard MS lock-in, and a growing number of studios support Linux as well.

There was some good write up from Star Citizen developers on this topic, who decided to ditch DX12 for Vulkan.


Can you link a few? I feel like I'm missing something here because the only AAA game out with Vulkan support I can recall is Doom.


Just some: https://en.wikipedia.org/wiki/List_of_games_with_Vulkan_supp...

Plus, more games for the Nintendo Switch will be using Vulkan. Also games built on common engines like Unity, Unreal and CryEngine (that's in the future - they have basically just added, or are adding, Vulkan support).


Is there a serious reason to believe that DX12 games are going to start coming out? Wikipedia only has 21 games on its list: https://en.m.wikipedia.org/wiki/List_of_games_with_DirectX_1...


Not so much DX12, but Vulkan ones yes. All major engines are now pushing Vulkan support out.


Certainly eventually? DX11 can't be the end of the line.


Sorry - I meant, is there reason to expect that in the next year and a half or so (it doesn't matter if the game comes out two days before you replace your card).

Also we may end up skipping to DX13.


Well I bit the bullet and ordered one of these, at the "special introductory price" of only as much as an nVidia GTX 1080...

The compute power seems pretty good from that review. Yes, I will be mining with it, so I'm hopeful of good results. I won't just be mining with it, though, so I hope the game performance is worth it too!


Don't you have to carefully balance your mining speed and power consumption to be profitable? This seems like the worst card for that.


If you're buying the card purely for mining, sure.

If you want one anyway (i.e. you don't necessarily put the card cost in your sums), then the returns ought to be better than just the electricity costs. Especially if (say) mining to hold rather than sell straight off.

There are also rumours of outrageous hash rates being possible - multiples of the current nVidia hash rates. But we'll see if they ever come to fruition.


I really wonder if miners are going to buy the whole stock despite the game bundle program and the low power efficiency.


I doubt it, a 1070 is still better performance per dollar with the power consumption figured in


I wonder which model Apple will use in the iMac Pro.


Apple says 11 TFLOPS single precision and 22 TFLOPS half precision, which puts the GPU at about a Vega 56, I think. I don't think Apple has said how many possible GPU configurations there are, so you may be able to bump it to a 64.


You're correct -- the specs page of the new iMac Pro is intentionally vague, but they do confirm this much at least: https://www.apple.com/imac-pro/specs/

(The 64-variant will actually have 16GB HBM2, instead of 8, though.)


They're certainly not putting a 300 Watt gpu in an all-in-one enclosure. The only way that would work is if they used the closed loop cooler version, which is too bulky.


Their stock price jumped 8% today; I wonder if it's too late to get in?


AMD has been bouncing up and down between roughly $10 and $14 since the Ryzen hype.



