No. I am asking whether there are genuine new use cases that would be enabled by, say, a ten-fold increase in processing power. Other than that it is nice to go from 2K at 30 Hz to 4K at 120 Hz (which is roughly an 8-fold increase).
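(For scale, a quick back-of-the-envelope on that parenthetical; the exact factor depends on what "2K" is taken to mean, so treat these numbers as illustrative rather than definitive.)

```python
# Pixel throughput before and after; "2K" is ambiguous, so both
# common readings are shown.
def pixel_rate(width, height, hz):
    return width * height * hz

before_1080p = pixel_rate(1920, 1080, 30)   # "2K" read as 1080p
before_1440p = pixel_rate(2560, 1440, 30)   # "2K" read as 1440p
after_4k = pixel_rate(3840, 2160, 120)      # 4K at 120 Hz

print(after_4k / before_1080p)  # 16.0x if "2K" means 1080p
print(after_4k / before_1440p)  # 9.0x if "2K" means 1440p
```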
Your question is difficult to answer in good faith because "everything" would benefit from a ten-fold increase in processing power. Supercomputers could process protein folding faster; really, any kind of simulation could benefit from it. Deep learning could eat a ten-fold increase overnight (especially if accompanied by faster buses and memory). Gaming at 30Hz is really not great; gaming at 60/120/144Hz is much better. VR was already mentioned in this very thread.
I answered your question with a joke because your question is a joke: what doesn't benefit from a 10-fold increase in processing power? Why would no new use case arise from a widely available 10-fold increase in processing power when the last 40 years have shown that new tech always materializes?
Or maybe you just wanted to make a cynical statement: that all this tech from the last 20 years was pointless, that social media are a net negative for the world, that better video game graphics are pointless because only gameplay matters, that smartphones (only possible because of a previous wave of 10-fold increases) are only addictive little screens and not actually useful in everyday life?
Yes there are genuine new use cases that would appear with a ten-fold increase in processing power. It's a certainty.
> I answered your question with a joke because your question is a joke: what doesn't benefit from a 10-fold increase in processing power?
You misread the question pretty badly. They asked for new use cases, not things that would benefit.
We can fold proteins pretty well now; 10x wouldn't fundamentally change things.
There was a point where 10x in power made VR feasible. We're past that point, and can already do 144Hz VR with pretty good graphics. More power would increase the resolution but that's not a new use case.
Deep learning, hmm. A 10x increase in GPU RAM would be pretty great for using things like GPT-3, but the limiting factor there isn't really processing power.
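(For rough scale on the memory point: a minimal sketch, assuming GPT-3's published 175B parameter count and 16-bit weights; real serving setups vary, so this is only a ballpark.)

```python
# Ballpark memory footprint of GPT-3-sized weights, assuming 175B
# parameters stored at 16-bit (2-byte) precision. Activations and
# caches would add more on top of this.
params = 175e9
bytes_per_param = 2  # fp16

weight_gb = params * bytes_per_param / 1e9
print(weight_gb)  # ~350 GB of weights alone, far beyond a single GPU's RAM
```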
Remember that you said "sorely need". That's a much stronger statement than benefiting.
> Why would no new use case arise from a widely available 10-fold increase in processing power when the last 40 years have shown that new tech always materializes?
Well, I can look back about a decade for the last 10-fold increase. Everyone has an SSD now; that's major, but it's not based on processing power. VR is the only significant new use case I can name. That's cool, but it's not exactly a big impressive list.
> Well, I can look back about a decade for the last 10-fold increase.
A decade ago (2011) we didn't even have "deep learning"; the real takeoff started around 2012–2013, when GPUs became good enough that things like AlexNet/ImageNet were practical. So that's one.
> You misread the question pretty badly. They asked for new use cases, not things that would benefit.
I answered their question pretty directly, and I address your point in my reply as well. A 10-fold increase in computing power brought us affordable smartphones (in 2011 iPhones existed, but they were mostly for the more fortunate among us).
A 10-fold increase in processing power brought us the video streaming services of today. Without modern processing capabilities, Netflix and co. probably wouldn't exist in any capacity. A 100 Gbps network interface just wasn't a thing in 2011, and it's not because engineers didn't have the idea.
A new Pixel 6 can erase people from my pictures automatically on-device; that's another very nice use case that wasn't possible with hardware from even five years ago.
If you want to argue that a 10-year horizon isn't long enough, then sure, but there are literally thousands of use cases that only exist because computing scales up and gets cheaper every year.
If we had asked this question in similar terms just a few years ago, we wouldn’t have things like consumer VR and AR.
You absolutely need high refresh rates and frame rates for VR, or else you get motion sick and/or lose immersion.
You’d pretty much always benefit from higher resolution for VR since the pixels are being placed much closer to your eyes and spread across your entire peripheral vision.
VR reduces the resolution your GPU can handle at a given level of performance/fidelity, because two separate images, one per eye, have to be drawn.
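(A minimal sketch of why stereo rendering eats into the pixel budget; the resolutions and refresh rates below are illustrative assumptions, not any particular headset's numbers.)

```python
# Two views per frame, at high refresh rates, at per-eye resolutions
# that keep climbing. All numbers here are illustrative.
def pixels_per_second(width, height, hz, views=1):
    return width * height * hz * views

flat_1440p_60 = pixels_per_second(2560, 1440, 60)
vr_stereo = pixels_per_second(2000, 2000, 120, views=2)

print(vr_stereo / flat_1440p_60)  # roughly 4.3x the raw pixel work
```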
The increase in performance in graphics cards is enabling entire industries to exist and accelerating scientific research.
Phones in your pocket are performing on-device ML and AR in ways previously thought impossible. They’re being used to shoot actual movies that are shown in actual theaters.
Low-power tech like smartwatches wouldn’t be possible without these breakthroughs, because ultimately faster processors also imply low-power devices that can actually do decent amounts of computation.
So to answer your question, new use cases have already been unlocked by simply having more processing power to play with. It’s never been the case that all possible use cases are crystal clear before the tech that enables those use cases exists.
If you want to know why Facebook changed their name to Meta, it’s actually because they see a near-future of VR/AR devices getting a lot less clunky to the point where they can offer a seamless virtual social network and/or truly next generation video conferencing where everyone feels like they’re in a room together. While the exercise may appear to be damage control from an arrogant billionaire, I can see the argument and business case for their vision.
> If we had asked this question in similar terms just a few years ago, we wouldn’t have things like consumer VR and AR.
Things won't stagnate just because the sales pitch for a new product might be "no new use cases, it 'only' makes things smoother, quicker, and more detailed".
With that out of the way: not every increase in power is equally useful, and it's a fair question to ask. VR is good and newly enabled, but that doesn't mean we're going to get a bunch of tech with similar potential in the next handful of years.
> Phones in your pocket are performing on-device ML and AR in ways previously thought impossible. They’re being used to shoot actual movies that are shown in actual theaters.
The fancy algorithms are somewhat important to picture quality, but for photos a slower chip would be okay, and for video you could run them in post without much disruption.
> Low-power tech like smartwatches wouldn’t be possible without these breakthroughs, because ultimately faster processors also imply low-power devices that can actually do decent amounts of computation.
On the other hand, we've had pretty complex devices that run on button cells for decades. And a lot of a smartwatch's power budget is taken up by the screen and radios rather than processing. And... wait a second, the original Apple Watch came out long enough ago that it's around 8x slower than the current model.
Basically what you’re saying is that we can get by with worse tech.
Well, yes. You can use a Casio calculator watch from the ’80s to make calculations. We got by just fine before computers existed at all.
But that’s not really the point. The point is that there is value in simple raw horsepower. If there wasn’t, all these chip manufacturers would pack up their bags and shut down their R&D departments, because businesses don’t pay people to make things that have no market value.
It is an inevitability, in my opinion, that extra horsepower will enable some kind of new experience, however minor. Your example of the original Apple Watch is perfect here. The original Apple Watch barely functioned, for starters, and had a reputation for being slow. It didn’t have GPS, cellular, an always-on screen, or a number of other features that need a very powerful and power-efficient chip, display, and wireless modem/chip. All of these features are things that have market value.
I’m honestly not sure what you’re trying to say we should do here, not try to make computing faster? Why?
> Basically what you’re saying is that we can get by with worse tech.
> I’m honestly not sure what you’re trying to say we should do here, not try to make computing faster?
No. I'm saying that if you look at tech, generation by generation, faster is sometimes just faster. It doesn't always enable new use cases.
There's no extra subtext. I'm not saying we should stop making things faster. I just don't want the benefits of increased speed/efficiency to be overstated.
I don't think "we sorely need faster matrix multiplication" is true. It's just a nice-to-have. Which is still enough reason to make it! But it's a different level of benefit.
> It is an inevitability, in my opinion, that extra horsepower will enable some kind of new experience, however minor.
Sufficiently minor things aren't "genuine new use cases" that we "sorely need".
Eventually you'll hit a new use case, probably, but you might get zero new use cases for a long time.
> Your example of the original Apple Watch is perfect here. The original Apple Watch barely functioned, for starters, and had a reputation for being slow. It didn’t have GPS, cellular, an always-on screen, or a number of other features that need a very powerful and power-efficient chip, display, and wireless modem/chip. All of these features are things that have market value.
Not very much of that depends on computation getting faster.
I replied to your other comments but now I see that you are arguing in bad faith and I regret the time investment.
> It doesn't always enable new use cases.
It doesn't have to always enable new use cases; you are just moving the goalposts.
> Sufficiently minor things aren't "genuine new use cases" that we "sorely need".
A use case is a use case; you don't get to redefine what counts as a "major" or "minor" use case to justify your bad arguments. That's another example of moving the goalposts.
> Not very much of that depends on computation getting faster.
That statement is objectively wrong. Power efficiency is a trade-off with processing power: if we can make a faster chip, we can make a similarly capable chip that consumes less power.
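(A minimal sketch of that trade-off, assuming the textbook dynamic-power relationship P ≈ αCV²f and that a faster-capable chip can hit the same clock at a lower voltage; real silicon is messier, so this is illustrative only.)

```python
# Textbook dynamic-power model: P is proportional to C * V^2 * f.
# If a newer, faster-capable chip can hit the old clock at a lower
# voltage, the same workload runs at a fraction of the old power.
def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage**2 * frequency

old = dynamic_power(capacitance=1.0, voltage=1.0, frequency=1.0)  # normalized baseline
new = dynamic_power(capacitance=1.0, voltage=0.8, frequency=1.0)  # same clock, 0.8x voltage

print(new / old)  # 0.64 -> about a third less power for the same performance
```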