The last time this conversation came up, a slew of people called it a strategic mistake for AMD. I'm actually willing to bet it's the opposite (disclosure: I'm long AMD).
Something tells me AMD decided to make strange bedfellows this time around against a common, growing threat in multiple spaces, including graphics as well as machine learning (Nvidia). It's one agreement, but for AMD it's both a revenue source and a foot in the door for potential collaborative efforts, which could give it some stability as it broadens its offerings and looks at new avenues for value creation in the face of headwinds such as the diminishing returns from further transistor shrinkage.
I think the real elephant in the room is ARM. The iPad Pro already has the performance of (at least) a "real" laptop. Sure, Apple is expensive, but very soon you are going to see cheap and actually good ARM PCs. That can't be good for either Intel or AMD.
It seems like only Apple has made a performant ARM core so far, though. Will they license the technology? It doesn't fit their historical business model. Or do you think someone else is going to make ARM fast?
There's really nothing to stop another ARM licensee making more powerful ARM processors, there's no special Apple pixie dust, it's just a matter of investment. The other licensees just got lazy and psychologically trapped by their price models into thinking in terms of deterministic, incremental annual performance and feature improvements. They 'knew' nobody else would break that cycle because the margins weren't big enough to support the investment needed to jump so far ahead. If the margins and volume in fast efficient ARM laptops are there, then there's nothing stopping them from happening.
One of the huge differences is that Apple hasn't gone down the route of making octa-core ARM CPUs. They stuck to dual-core designs all the way until (and including) the iPhone 6S, with the iPhone 7 going quad-core and the iPhone 8 going hexa-core. They've very clearly avoided the big.LITTLE approach that most ARM licensees have followed.
> There's really nothing to stop another ARM licensee making more powerful ARM processors, there's no special Apple pixie dust, it's just a matter of investment.
Apple CPUs are designed in-house, not by ARMH. It's not magic pixie dust, but designing a CPU core architecture from scratch is a non-trivial investment of which very few companies are capable.
Really? Only Apple? What about Qualcomm or Cavium? Arm cores are now in the premier systems offered by Cray and HPE. See here for a comparison to Skylake for Cray's system: https://twitter.com/simonmcs/status/930425114922958848 and here for HPE's: https://twitter.com/hpc_guru/status/930194415787888642 There's also the Centriq from Qualcomm; I think there were some numbers posted at the GoingArm HPC users group. Bottom line, there are plenty of super fast Arm processors that match or beat the best Intel can put out.
Why should an iPad need laptop-like performance? The iPad is still a consumer device (I'll probably get downvoted for this statement), while something like the Surface Pro is both.
The iPad Pro, at least, is explicitly positioned as a working machine, with "far more power than most PC laptops":
> The A10X Fusion chip with 64‑bit architecture and six cores puts incredible power in your hands. So you can edit a 4K video on the go. Render an elaborate 3D model. Or create and mark up complex documents and presentations.
> The A10X Fusion chip with 64‑bit architecture and six cores puts incredible power in your hands. So you can edit a 4K video on the go.
I'm not discussing the current hardware; I acknowledge their marketing about it. I'm discussing the tools and usability on an iPad Pro. I'm saying that the usability of a tablet is geared toward consumption and doesn't need the power of the iPad Pro.
P.S. I've had both in my hands, and I can create things far more easily on a Surface Pro than on an iPad Pro. I can't even imagine doing development on an iPad; I can do it on a Surface Pro, though.
Supposedly Jim Keller did work on that, besides Zen.
They simply stopped talking about their supposedly awesome new ARM chip. My personal guess is that they're reworking it, targeting RV64G now, because why bother with ARM licensing?
> My view is CUDA has already won, and everyone else needs to get over it. Even clang supports PTX now, which is a reasonably device agnostic representation, albeit controlled mostly by Nvidia. Perhaps intel will introduce their own extensions to this ISA.
> Even if my precompiled CUDA application could run on Intel GPUs at 50% of the throughput, I'd be happy if I could later tweak and recompile it to get the full benefits from their hardware.
How much more convincing do you need that it will be Apple? No one comes close to them in ARM performance. The iPad Pro’s processor is so overpowered compared to the software you can run on it, it’s incredible that we don’t have ARM MacBooks already.
And both times they had a transitional period in which their toolchain was modified to produce "fat" binaries, including executables for both the old and new architectures.
I find it stunning that people think Apple couldn't pull this off a third time.
Both times, the performance boost of the new architecture significantly offset the emulation overhead. It's going in reverse this time, emulating the old instruction set with a weaker processor.
Further, there's a lot more focus on battery life these days, and while many apps would be tolerable from a UX perspective if you halved their speed, that also means the CPU spends twice as long at the highest turbo states. Ouch.
I also think Apple could pull it off, and actually want to see it happen for the health of the PC industry. But I haven't seen anyone declaring it's not possible, I think that's a bit of a straw man.
Both times the new CPU architecture was faster than the old one, granted. But emulated software would still run slower on the new models than it ran natively on the old ones.
I assume Apple will make some more improvements to close the gap between x86 and ARM, and switch when it makes sense for them. I also assume that, with the Mac App Store, they will start forcing developers to ship software compiled for both architectures ahead of making the switch. That way, only software distributed outside their store would be slow, which is going to become more and more niche now that Apple has enabled stricter signing enforcement by default in macOS Sierra (by default, only apps from the Mac App Store or from identified developers are allowed to run).
Given that Apple has full control over the CPU itself, I wouldn't be surprised if the first ARM Mac has a CPU with a handful of extra instructions which bridge the performance gap. I bet they already did extensive profiling of x86 code from any major app (esp. those submitted to the app store) and identified 5-6 instructions (to be made available to the emulation layer only) which avoid some of the most expensive translation / abstraction / emulation steps.
They don't have full control. Apple can't add instructions, only Arm can. If they want something new they have to go back to Arm; sure, they probably have lots of pull, but that's the way it is. And that's one reason people like RISC-V.
Actually - if I'm not mistaken - what you say holds true for almost any other company but _not_ for Apple: They are holders of an architectural license, which gives them (and very few others in the world) this freedom.
An architecture license means you can use the ISA (provided by Arm). A microarchitecture license means you license Arm's core implementations. Apple is an architecture (ISA) licensee, meaning they have full control of the microarchitecture as long as it passes verification by Arm. That's the beauty of Arm: you get rid of the problem of creating the whole software ecosystem... it's already there, and companies like Cavium, Apple, and Qualcomm are free to add their secret sauce and benefit from the mature software environment. At least, that's the hope.
Prior to ARMv8, all licensees could add additional specialized instructions or even entire instruction sets using the ARM coprocessor model.
By changing a few registers, you could alter how instructions were interpreted. This was used by some vendors for hardware-software interfacing, and by others to provide specific acceleration primitives for their specialized workloads.
ARM used this mechanism themselves to offer an optional floating point unit, with specialized vectorized floating point instructions.
Unfortunately, support for coprocessors has been removed from ARMv8.
I think it's more likely that Apple will just let OS X and the desktop/laptop form factor die on the vine as they turn iOS devices into more and more of a 'pro' product, with more and more of OS X's features, more input modalities, etc.
To me this is a bit of a frightening scenario given the closed nature of iOS.
I was interested to learn that, though the "OP1" processor in the Samsung Chromebook Plus is manufactured by Rockchip, the "OP" brand belongs to Google and is used for ARM CPUs optimized for Chromebooks: https://www.theverge.com/2017/2/22/14691396/google-chromeboo...
I used the Samsung Chromebook from 2012 with the Exynos 5 ARM CPU for Java development, running Eclipse and a web server locally. It was not speedy, but a very OK setup for a laptop costing 250 USD. The main problem was the speed and amount of memory, not raw CPU performance.
Given that, I expected some company would release ARM laptops with better specs memory-wise, especially after the introduction of ARM64 CPUs, but that did not happen. I guess Intel still has too much grip on the industry.
Technically there already is one - the Touchbar is controlled by a dedicated ARM co-processor.
There was a report on MacRumors [1] (yes, so take it with a grain of salt) that the iMac Pro is likely to ship as an A10 Fusion + Intel Xeon.
The A10 would apparently be 'always-on' for things like "Hey Siri", and would actually handle the boot process, passing the EFI firmware for the Xeon to boot with.
I've been wondering if it has to do with ATI's/AMD's amdgpu open-source driver work.
Intel has announced new performance graphics parts on the way. They'll be learning a lot from AMD, and it seems reasonable to say they'll be leaning on amdgpu for these parts.
If they're quietly teaming up to support an open, modern video driver architecture, that would help put pressure on nVidia. While nVidia has historically been the better performer, its drivers can be fickle.
I still call it ATI by accident because that's what it was during my formative years (middle school). Calling it "AMD" feels weird to me. It's hard for people to learn a new word for something they already know.
Do what I do, I call AMD CPUs "AMD", but AMD GPUs "Radeon"; but never call Nvidia GeForces "GeForces", instead just referring to them as Nvidia.
So it isn't unusual for me to say "Intel", "AMD", "Radeon", and "Nvidia" over the course of a conversation. AMD ended up legitimizing what I do when they split Radeon off as its own company (a la HTX.org and GloFo).
They didn't make Radeon into its own company; in fact, there are rumors Raja Koduri was butting heads with leadership over a desire to do just that. They split it into a separate group within the company, but that's just playing with the org chart to change who you report to.
That's what I felt when I first heard the news... then Raja jumped ship to Intel to head up a discrete GPU business, so I think Intel views this as more of a stop-gap "necessary evil" and will only entertain it for a limited time.
That's assuming Raja still has the chops to compete at the top table, something his latest tenure at AMD would call into doubt ...
I'm not sure it's that much of a strange-bedfellows situation. With this, they are not sharing their IP with Intel; these GPUs are discrete chips that AMD is just selling to Intel, who then sells them alongside their CPUs on the same package.
It's not like AMD's GPU designs are being shared with Intel. Nor is AMD giving up control of fabrication.
Both Intel and AMD are pretty meh when it comes to deep learning. I hope their partnership will give us more options, but I'm not sure they can deliver.
So Intel gets a better GPU than they can currently design for their integrated parts, while at the same time selling more CPUs and being given time to restart their GPU design team? I just don't get how this doesn't lead to huge litigation when, in two years, Intel has a competitive GPU and tells AMD to get lost. AMD will then complain about their GPU technology being stolen, but I'd bet a judge could be convinced it's their own fault.
A better scenario instead: AMD embedded parts that are miles ahead in graphics and close in CPU performance for most tasks lead to loads of design wins for Ryzen + AMD GPU.
So yes, I do think this is a terrible idea. If you think Intel aren't busting a gut to compete (this deal or not), you are wrong; they've already lowered prices and will pull ahead of AMD pretty fast in IPC over the next two years.
I'm a huge fanboy of open source, and this move is a boon for Linux, given Intel HD's and AMD's open-source GPU drivers on the laptop as well as the desktop. Not to mention it will further emphasize the division of gaming vs non-gaming rigs, meaning NVidia vs AMD(ATI?)/Intel, and hopefully push NVidia entirely out of business/work-purpose computers (or even gaming rigs if possible, since AMD/ATI's GPUs are rather decent too!). Forgive my severe, palpable dislike for NVidia in this post.
Looks like it's time to upgrade my ThinkPad once Lenovo integrates this setup into their T line.
I like OSS as well, but (correct me if I am wrong) aren't AMD GPUs way inferior for ML/deep learning compared to NVidia's? I am moving away from macOS to a ThinkPad. I am specifically waiting for an 8th-gen CPU from Intel with an NVidia discrete GPU for small-scale ML and light gaming.
What is the size of the ML market compared to the gaming market? I get the impression that the ML market for GPU chips is growing much faster than the gaming market.
> and halves the power usage of a traditional design.
Um, really? Rumors talk of a 65W (and a 100W) Kaby Lake G part. The CPU is 4C/8T @ 3.1/4.1 GHz; that's about 10% higher clocked than the 45W 7700HQ. The GPU side is rumored to be 24 CUs at 1000 MHz with 4GB of HBM2 @ 700 MHz.
Now, the Ryzen 5 2500U is a 15W part with 8 CUs at up to 1100 MHz. The CPU surely eats some of those watts, so I don't think it's too outlandish to claim that 24 CUs eat less than 30W, especially at a lower frequency.
All in all, I absolutely can't see the gigantic power savings Intel purports here. It looks like 10%, perhaps 15%, based on the above data. What am I missing?
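To make that arithmetic concrete, here's a back-of-envelope sketch. The 65W package, the 45W 7700HQ-class CPU, and the sub-30W estimate for 24 CUs come from the figures above; the CPU/GPU power split inside the 2500U is my own guess, so treat this as a rough sanity check, not a spec:

```python
# Back-of-envelope check of the figures quoted above. All numbers are the
# rumored/estimated ones from this thread, not official specs.

# Ryzen 5 2500U: 15 W total for 4 CPU cores + 8 CUs @ up to 1.1 GHz.
# Assume (my guess) the CPU cores take roughly half of that under combined load.
gpu_power_8cu = 15.0 * 0.5                   # ~7.5 W for 8 CUs
gpu_power_24cu = gpu_power_8cu * (24 / 8)    # ~22.5 W, scaled linearly to 24 CUs
gpu_power_24cu *= 1000 / 1100                # ~20 W at the lower 1.0 GHz clock
print(f"Estimated 24-CU GPU power: {gpu_power_24cu:.0f} W")

traditional = 45.0 + 30.0   # 45 W CPU (7700HQ-class) + ~30 W discrete mobile GPU
kaby_lake_g = 65.0          # rumored package TDP

savings = 1 - kaby_lake_g / traditional
print(f"Estimated savings: {savings:.0%}")   # ~13%, nowhere near a 50% cut
```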
I understood that statement as saying they were halving the power usage compared to a traditionally designed chip interconnect that uses ordinary package/board traces rather than the EMIB silicon bridge talked about in the article. Not halving the power usage of the entire chip, just the power required for the chip-to-chip connection.
Bewildering. Does a chip interconnect consume anything worth mentioning? Wouldn't that be converted to heat? I have never heard of the motherboard itself heating up...
It is the drivers on the chip required to go off-chip that consume power and create heat. Look at a die photo. Those wee little transistors in the middle? Logic and memory. Those gigantic cow turds around the edge? I/O drivers.
Could this have something to do with Apple shipping their own chips with their desktop computers? I would think that Apple would be interested in using those new Intel/AMD chips on their laptops.
It definitely looks like a survival move to counter the upcoming AXX Apple chips in the next few years. The new iMac Pro apparently has an A10 in it for the boot-up sequence.
Intel claims they made some kind of bus for this sort of thing (to be able to easily add third-party chips to their processor package), and the AMD GPU is just one example application of that bus. That said, their own GPUs are obviously inferior to AMD's, so if customers wanted a powerful GPU, the only choice used to be a separate PCIe video card. Now Intel offers another choice which is probably better (for example, I expect more PCIe lanes to be left over for SSDs and other things).
Oh, and the reason they're doing this is that ARM offerings from Qualcomm and Nvidia are very likely going to start seriously entering desktop PCs. Some people may argue they'll finally make a serious push into servers too, but that has failed so many times that I'm not holding my breath.
Why is nobody reading the article? They are using innovative chip packaging to integrate an AMD discrete laptop GPU and an Intel CPU into a single package. The Intel CPU still has an iGPU.
If anyone is threatened it is Nvidia because nobody needs their mobile GPUs anymore.
Intel actually announced that they're going to make their own discrete GPU in addition to selling AMD's. Both the AMD offering and the Intel discrete offering still come with the standard integrated Iris GPU for non-gaming (normal desktop) graphics.
When TensorFlow came out I happily downloaded it on my MacBook, only to learn that the "Iris GPU" is not supported (apparently there are NVIDIA-based MacBooks too, which are more expensive). What a bummer. Search and search, there is no solution, because Intel doesn't give a fuck about DL.
So now I enjoy seeing them desperate, but it will take a lot to convince me of their dedication.
> only to learn that the "Iris GPU" is not supported
You can use the CPU backend, which has gotten significant optimization since ~v1.2. You can probably use the integrated GPU with the OpenCL backend, but I am not sure it will be faster than a modern AVX2/3 CPU.
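For what it's worth, a minimal sketch of pinning TensorFlow to the CPU backend (assuming the TF 1.x API that was current at the time; the matrices and session setup are purely illustrative):

```python
import tensorflow as tf

# Hide any GPU devices so nothing is accidentally placed on one.
config = tf.ConfigProto(device_count={'GPU': 0})

# Explicitly place the graph on the CPU; the CPU kernels pick up
# AVX2/FMA optimizations when TF is built with them enabled.
with tf.device('/cpu:0'):
    a = tf.random_normal([1000, 1000])
    b = tf.random_normal([1000, 1000])
    c = tf.matmul(a, b)

with tf.Session(config=config) as sess:
    print(sess.run(tf.reduce_sum(c)))
```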
Yes and no.
Intel integrated graphics has always been a graphics solution built into the CPU package, sharing its die, memory bandwidth, and power budget.
This solves multiple problems: A) the computer has a dedicated graphics chip, and B) the computer has incredibly high bandwidth between CPU, GPU, and memory.
C) The CPU will be capable of utilizing the special stacked 3D graphics memory (HBM), which is patented by AMD.
This is a way more powerful solution than integrated graphics and thus puts a lot of pressure on Nvidia for laptops and mobile maybe even for office computers.
HBM, HBM2, HBM3 etc. are industry standard and the result of precompetitive research. Intel's KNL has been using similar stacked memory (MCDRAM). NVIDIA's P100 has been using HBM2.
The reason why I have doubts is that Intel has proven to be a rather weak competitor in anything other than multi- (but not many-) core x86. And their haphazard software strategy seems to be a major factor here. Especially the way Larrabee/Xeon Phi has gone so far, their push for OpenCL is IMO too little too late. Instead, OpenMP was promoted, with the vectorization basically brushed off as the magic the compiler will do for you... well it won't, and if it doesn't, you have no way to compete against Nvidia GPUs. Time will show whether they've now recognised this and work together with AMD in an effective way, but yeah...
Why wouldn't they do this? Intel is a fab company, architecture second. The main reason they bought Altera was EMIB. EMIB frees them from spending huge amounts of resources on cutting edge Architecture and Microarch, freeing them to focus on pushing the boundaries on their foundries (which are falling behind). Using EMIB enables Intel to incorporate whatever IP will sell in the market, theirs or somebody else's. The alternative for Intel is to drop their foundries and use GF or TSMC (don't see that happening). I can see Intel using Arm, AMD, Nvidia (if they'll let them, Nvidia seems to be pushing the discrete path hard...that'll fail given high latency of PCIe). Arm has already arrived in the HPC market, SC in Denver's unofficial theme was Arm (both Cavium and Qualcomm). There are even awesome desktop machines now (https://www.avantek.co.uk/store/avantek-32-core-cavium-thund...). HPC is the vanguard for server, already Arm compatible chips are providing perf greater than Skylake Intel server parts at a cheaper price point...why pay for Intel when you can have a Qualcomm Centriq or Cavium Thunder X2? IBM Power is yet another option, but hugely expensive. Definitely useful for GPGPU accelerated applications, but like Intel, IBM is going to be given a run for its money by AMD in sheer number of PCIe lanes.
I don't agree. SC's unofficial theme was AI/ML. Every single vendor and HPC center was talking AI; not all were talking ARM. The student cluster competition was dominated by GPUs, and not a single entry ran ARM.
We don't really care about ML. It just so happens that ML and AI are easy compared to what HPC normally deals with. ML is just statistics and learning functions, which in turn are really dominated by the linear algebra. It's hard to hear that when you like buzzwords, but it's just algebra, and pretty simple at that. That's why things like grad-student-built systolic array processors (popular in the '80s) dominate here. It's also why SC is showing lots of ML... because the community knows how to do that better than anybody.
I'm curious to see how these will be priced. They could save on costs over a dedicated gpu by sharing cooling systems and other parts, which would be exciting.
This whole thing looks a lot like AMD's EHP concept from a couple of years ago. In fact, I've been waiting for the Threadripper/EPYC approach to put a GPU and HBM in the same package with Ryzen. That would be the perfect module to put into the AMD Project Quantum concept PC. OTOH, the Intel solution is targeted at laptops. Let's hope AMD does the same for the desktop, and soon.
Omitting a bottleneck like the PCIe bus between the CPU and GPU makes memory sharing between the two much more viable. This allows a totally different kind of offloading of work from the CPU.
Intel will ship a package that has an AMD GPU + HBM combined with an Intel CPU, but they are not the same chip. Basically, Intel is buying already-fabricated GPU chips from AMD and putting them together with their CPU to sell as one unit.
This seems doubly bizarre, in that high-end Intel processors already tend to have integrated graphics hardware, yet high-end AMD processors don't.
For instance, I built a new computer using an AMD Ryzen processor recently (solely for its CPU speed). I hadn't realised that the CPU had no graphics hardware at all, and that I would need a separate graphics card in order to get any output on a monitor. Whereas previous computer builds using i7 CPUs needed no extra hardware.
It seems largely to be a cost issue, since AMD does not exactly have huge amounts of cash to pour into lots of different chip designs for specific purposes like Intel does.
AMD wanted to release 8 core parts that competed on price with Intel's 4 core parts. Roughly half the die area (and thus manufacturing cost) of Intel's quad core desktop CPUs is the GPU, so by cutting the GPU they were able to make 8 cores at similar cost to Intel's 4 cores + GPU.
It also allowed them to reuse the same chip for desktop, workstation, and server parts dramatically reducing R&D costs for the whole lineup. And really "high end" Intel processors, as in everything with 8 cores or more, also lack the GPU anyway.
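As a rough sketch of that die-area arithmetic (the die size here is hypothetical; only the half-the-die-is-GPU ratio from the comment above matters):

```python
# Illustrative, made-up die size; the ratio is the point, not the absolute number.
quad_core_die = 120.0                   # mm^2, hypothetical Intel 4-core + GPU die
gpu_block = quad_core_die / 2           # roughly half of the die is the GPU
four_cores_uncore = quad_core_die / 2   # the other half is 4 cores + uncore

octa_core_die = 2 * four_cores_uncore   # drop the GPU, spend that area on 4 more cores
print(octa_core_die)                    # ~120 mm^2: twice the cores for similar silicon cost
```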
The Raven Ridge CPUs are 4 cores + GPU (with a much faster GPU than anything Intel has). I imagine they'll do a higher core count part with a GPU once GlobalFoundries starts making 7nm chips.
Yeah exactly. No one who values their security or privacy should even consider buying one of these. Especially businesses/corporations as you have no idea what is running in that hidden operating system. [1]
When I slip on my tinfoil hat, I wonder if that's the reason for this. There's nothing here that AMD couldn't offer on its own, so why combine their graphics with Intel CPUs if not for the ME?
Any customer who would buy this would probably buy an AMD APU instead, and AMD would take home more of the money without giving their competitor anything. AMD also has the capability to produce almost exactly the same product using their own processor instead of Intel's, so again, why help your competitor by sharing the profits?
I'd say either AMD doesn't have the production capacity for whoever the customer is (Apple?), or it's something more nefarious like preserving the ME.
Given that AMD is still a close second in graphics, maybe it really is a way to increase volumes for their GPUs. But that still suggests there's some reason they can't deliver the same with their own CPUs.
Intel has more of the existing partnerships with OEMs, and that isn't going to reverse immediately with a strong showing from AMD. The new Ryzen Mobile notebooks are low-to-midrange entries and groundbreaking in their category, but Intel still covers that higher end segment.
It's easier to see this from Intel's perspective: Intel's biggest threat is the onrush of GPU-driven computing, and Nvidia is the market leader there. The classic play is to starve them of oxygen by leveraging existing channels to push them out of the gaming notebook market. Thus comes this weird saga with AMD and Raja, which is in fact a win-win deal: Intel gets ammo to fight Nvidia now and a key hire for its own development later, and AMD gets another source of cash flow and market share, plus a graceful exit from what has been reported as shaky, conflicted executive management at RTG. Although they've delivered decent hardware and software recently, there have been numerous PR flubs from the group, and the business unit's performance is questionable overall. There is an open question of what happens to RTG later on, but perhaps the answer is simply to survive in Intel's shadow again.
I can buy that. I didn't know there were issues at RTG. The point about OEMs not switching quickly is also a reality. It still seems really strange. The next move could be nVidia making their own high-performance CPUs to integrate with their GPUs. Think RISC-V here: they've already embraced it, and the potential and freedom to innovate are wide open.
The last time AMD got into a fight with Intel, Intel pulled out all the stops. The only relief AMD got--quite a ways down the road--was in the form of damages. https://en.wikipedia.org/wiki/Advanced_Micro_Devices,_Inc._v....