> And I suspect that kind of story will continue quite a bit as the tech matures.
Don't you think that this tech can only get better? And that there will come a time in the very near future when the programming capabilities of AI improve substantially over what they are now? After all, AI writing 300 line programs was unheard of a mere 2 years ago.
This is what I think GP is ignoring. Spreadsheets couldn't do every task an accountant can do, so they augmented their capabilities. Compilers don't have the capability to write code from scratch, and Python doesn't write itself either.
But AI will continually improve, and spread to more areas that software engineers were trained on. At first this will seem empowering, as they will aid us in writing small chunks of code, or code that can be easily generated like tests, which they already do. Then this will expand to writing even more code, improving their accuracy, debugging, refactoring, reasoning, and in general, being a better programming assistant for business owners like your friend than any human would.
The concerning thing is that this isn't happening on timescales of decades, but of years and months. Unlike GP, I don't think software engineers will exist as they do today in a decade or two. Everyone will need to become a machine learning engineer, working directly on training and tweaking the output of AI; then, once AI can improve itself, it will become self-sufficient, and humans will only program as a hobby. Humans will likely be forbidden from writing mission-critical software in the health, government and transport industries. Hardware engineers might be safe for a while after that, but not for long either.
You are extrapolating from when we saw huge improvements 1-2 years ago. Performance improvements have flatlined. Current AI predictions remind me of the self-driving car hype from the mid 2010s.
Multimodality, MoE, RAG, open source models, and robotics have all seen massive improvements in the past year alone. OpenAI's Sora is a multi-generational leap over anything we've seen before (not released yet, granted, but it's a real product). This is hardly flatlining.
I'm not even in the AI field, but I'm sure someone can provide more examples.
> Current AI predictions remind me of the self-driving car hype from the mid 2010s
Ironically, Waymo's self-driving taxis were launched in several cities in 2023. Does this count?
I can see AI skepticism is as strong as ever, even amidst clear breakthroughs.
You make claims of massive improvements, but as an end user I have not experienced them. With the amount of fake and cherrypicked demos in the AI space, I don't believe anything until I experience it myself.
>Ironically, Waymo's self-driving taxis were launched in several cities in 2023. Does this count?
No because usage is limited to a tiny fraction of drive-able space. More cherrypicking.
Just because you haven't used text generation with practically unlimited context windows, insight extraction from personal data, massively improved text-to-image, image-to-image and video generation tools, and ridden in an autonomous vehicle, doesn't mean that the field has stagnated.
You're purposefully ignoring progress, and gating it behind some arbitrary ideals. That doesn't make your claims true.
No. The progress is not being ignored. Normal people just have a hard time getting excited for something that is not useful yet.
What you are doing here is the equivalent of popular science articles about exciting new battery tech - as long as it doesn’t improve my battery life, I don’t care.
I will care once it hits the shelves and is useful to me, I do not care about your list of acronyms.
I was arguing against the claim that progress has flatlined, and when I gave concrete examples of recent developments that millions of people are using today, you've now shifted the goalpost to "normal" people being excited about it.
But sure, please tell me more about how AI is a fad.
You are seeing enemies where there are none. I am merely commenting on AI evangelists insisting I have to be psyched about every paper and poc long before it ever turns into a useable product that impacts my life.
I don’t care about the internals of your field, nobody does. Achieve results and I will gladly use them.
There is a bit of a fallacy in here. We don’t know how far it will improve, and in what ways. Progress isn’t continuous and linear, it comes more in sudden jumps and phases, and often plateaus for quite a while.
Fair enough. But it's just as much of a fallacy to assert that it won't improve, no?
The rate of improvement in the last 5 years hasn't stopped, and in fact has accelerated in the last two. There is some concern that it's slowing down as of 2024, but there is such a historically high amount of interest, research, development and investment pouring into the field that it's more reasonable to expect further breakthroughs than not.
If nothing else, we haven't exhausted the improvements from just throwing more compute at existing approaches, so even if the field remains frozen, we are likely to see a few more generational leaps still.
AI can’t write its own prompts. Picture 10k people typing the same prompt who actually need 5000 different things.
No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.
Information has to come from somewhere to differentiate 1 prompt into 5000 different responses. If it’s not coming from the people using the AI, where else can it possibly come from?
If people using the tool don’t know how to be specific enough to get what they want, the tool won’t replace people.
What makes you say that? One model can write the prompts of another, and we have seen approaches combining multiple models, and models that can evaluate the result of a prompt and retry with a different one.
> No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.
No, but it can certainly produce output until the human decides it's acceptable. Humans don't need to give precise guidance, or answer technical questions. They just need to judge the output.
I do agree that humans currently still need to be in the loop as a primary data source, and validators of the output. But there's no theoretical reason AI, or a combination of AIs, couldn't do this in the future. Especially once we move from text as the primary I/O mechanism.
I predict the rate of progress in LLMs will diminish over time, whereas the difficulty of an LLM writing an accurate computer program will go up exponentially with complexity and size. Is an LLM ever going to be able to do what, say, Linus Torvalds did? Heck, I've seen much less sophisticated software projects than that which it's hard to imagine an LLM pulling off.
On the lower end, while Joe Average is going to be able to solve a lot of problems with an LLM, I expect more bugs will exist than ever before because more software will be written, and that might end up not being all that terrible for software developers.
I don't know about the rest of the developers in the world but my dream come true would be a computer that can write all the code for me. I have piles of notebooks and files absolutely stuffed with ideas I'd like to try out but being a single, measly human programmer I can only work on one at a time and it takes a long time to see each through. If I could get a prototype in 30 seconds that I could play with and then have the machine iterate on it if it showed promise I could ship a dozen projects a month. It's like Frank Zappa said "So many books, so little time."
If that's the case, then within a short amount of time all your ideas will already have a wide range of implementations/variations, because everybody will do the same as you.
It's like how LLMs are now on the way to taking over (or destroying) content on the web, and posts on social media next, by letting anyone create anything so fast that the incentive to put manual labor into a piece of content is becoming irrelevant in some ways. You work for days to write a blog post and publish it, and in the same time thousands of blog posts are published alongside yours, fighting for the attention of the same audience, who might just stop reading entirely because everything is so similar.
Honestly, I don't care if other people are creating similar things to what I am. Actually, I prefer it when more people are working on the same things, because it means there are others I can talk to about those things and collaborate with. Even if I don't want to work with others on a project, I'm not discouraged by other implementations existing; there's always something I would want to do differently from what's out there. That's the whole point of building my own things, after all: if I were happy with whatever bog-standard app I could find on the web, why would I need to build it?

It isn't just about making it my own, either; it's also about the fun of diving into the guts of a system and seeing how things work. Having a machine capable of producing that code gives me the fantastic ability to choose only the parts I'm interested in for a deep dive, while skipping the boring stuff I've done 1000 times before. And I don't have to type the code if I don't want to; I can just talk to the computer about the implementation details and explore various options with it. That in itself is worth the time spent on it, but the awesome side effect is you get a new toy to play with too.