The great thing about LLMs being more or less commoditized is that switching is so easy.
I use Claude Code via the VS Code extension. When I got a couple of 500 errors just now, I simply copy-pasted my last instructions into Codex and kept going.
It's pretty rare that switching costs are THAT low in technology!
Which is exactly why these companies are now all focused on building products rather than (or alongside) improving their base models. Claude Code, Cowork, Gemini CLI/Antigravity, Codex - all proprietary, and none of them allow model swapping (or only with heavy restrictions). As models get more and more commoditized, the idea is to enforce lock-in at the app level instead.
FWIW, OpenAI Codex is open source, and they help other open-source projects like OpenCode integrate their accounts (not just the expensive API), unlike Anthropic, which blocked exactly that last month and forces people to use its closed-source CLI.
The switching cost is so low that I find it easier and better value to have two $20/mo subscriptions from different providers than a $200/mo subscription with the frontier model of the month. Reliability and model diversity are a bonus.
I genuinely don't know how any of these companies can make extreme profit for this reason. If a company makes a significantly better model, shouldn't that model itself be able to explain to any competitor what makes it better?
Google succeeded because it understood the web better than its competitors. I don't see how any of the players in this space could be so much better that they could take over the market. It seems like these companies will create commodities, which can be profitable but are incredibly risky for early investors and won't generate the profits necessary to justify today's valuations.
> It's pretty rare that switching costs are THAT low in technology!
Look harder. Swapping USB devices (mouse, …) takes even less time. Switching Wi-Fi is also easy. Switching browsers works the same. I can equally use vim/emacs/vscode/sublime/… for programming.
Good point: those are standards; by definition, society forced vendors to behave and play nice together. LLMs are not standardized yet, and it is just pure bliss that English works fine across different LLMs for now.
Some labs are trying to push their own formats and stop that, especially around reasoning traces, e.g. Codex removing reasoning traces between calls and Gemini requiring reasoning history. So don't take this for granted.
You make it sound like lock-in doesn't exist. But your examples are cherry-picked. And they're all standards anyway; their _purpose_ was easy switching between implementations.
Most people only have one mouse or Wi-Fi network. If my Wi-Fi goes down, my only other option is to use a mobile hotspot, which is inferior in almost every way.
Gives you a good window into a vibe coder's mentality. They do not care about anything except what they want to get done. If something is in the way, they will just try to brute force it until it works, not giving a duck if they are being an inconvenience to others. They're not aware of existing guidelines/conventions/social norms and they couldn't care less.
This sounds like a case of a bias called the availability heuristic. It'd be worth remembering that you often don't notice people who are polite and normal nearly as much as people who are rude and obnoxious.
Could it be that you're creating a stereotype in your head and getting angry about it?
People say these things about any group they dislike. It happens so much that these days it feels like most social groups are defined by outsiders, by the things those outsiders dislike about them.
I am starting to get concerned about how much “move fast break things” has basically become the average person’s mantra in the US. Or at least it feels that way.
You're about a decade+ late to the party; this isn't some movement that happened overnight, it's a slow cultural shift that's been happening for quite some time already. Quality and stability used to be valued; judging by what most people and companies put out today, the focus now seems to be on quantity and "seeing what sticks" instead.
I’m not saying it’s a sudden/brand new thing, I think I’m just really seeing the results of the past decade clearly and frequently. LLM usage philosophies really highlight it.
Wow, are these submitted automatically by Claude Code? I'm not comfortable with the level of detail they have (the user's Anthropic email, the full path of the project they were working on, stack traces...)
Definitely some automation involved; no way the typical Claude Code user (no offense) would by default put so much detail into an issue report, especially users who don't seem to understand that it's Anthropic's backend at fault (given the status code) rather than the client/harness.
A long time ago I was taking flight lessons and I was going through the takeoff checklist. I was going through each item, but my instructor had to remind me that I am not just reading the checklist - I need to understand/verify each checklist item before moving on. Always stuck with me.
A few times a year I have to remind my co-workers that reading & understanding error messages is a critical part of being in the IT business. I'm not perfect in that regard, but the number of times the error message explaining exactly what's wrong and how to solve it is included in the screenshot they share is a little depressing.
This is the kind of abuse that will cause them to just close GitHub issues.
Or they'll have to put something in the system prompt to handle this special case, where it first checks for an existing bug report and just upvotes it rather than creating a new one.
I've made a feature request there to add another GitHub Actions bot to auto-close issues reporting errors like this when an outage is happening. Would definitely help to cut through the noise.
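Roughly, the bot would just cross-check the status page before touching anything. Here's a minimal sketch of the idea, assuming the status page is a standard Statuspage instance (those expose /api/v2/status.json) and a token with issue-write access; the keyword filter is made up for illustration and this is not Anthropic's actual tooling:

    # Hypothetical sketch of an outage-aware auto-close bot.
    import os
    import requests

    STATUS_URL = "https://status.anthropic.com/api/v2/status.json"
    REPO = "anthropics/claude-code"
    KEYWORDS = ("500", "internal server error", "api error")  # illustrative filter

    def outage_in_progress() -> bool:
        # Statuspage reports indicator "none" when all systems are green.
        indicator = requests.get(STATUS_URL, timeout=10).json()["status"]["indicator"]
        return indicator != "none"

    def close_outage_reports(token: str) -> None:
        headers = {"Authorization": f"Bearer {token}",
                   "Accept": "application/vnd.github+json"}
        issues = requests.get(f"https://api.github.com/repos/{REPO}/issues",
                              params={"state": "open", "per_page": 100},
                              headers=headers, timeout=10).json()
        for issue in issues:
            if "pull_request" in issue:  # the issues endpoint also returns PRs
                continue
            if any(k in issue["title"].lower() for k in KEYWORDS):
                requests.post(issue["comments_url"], headers=headers, timeout=10,
                              json={"body": "Closing: this matches an ongoing, "
                                            "acknowledged outage on the status page."})
                requests.patch(issue["url"], headers=headers, timeout=10,
                               json={"state": "closed", "state_reason": "not_planned"})

    if __name__ == "__main__":
        if outage_in_progress():
            close_outage_reports(os.environ["GITHUB_TOKEN"])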
There has to be some sort of automation making these issues; too many of them are identical but posted by different people.
Also love how many have the "I searched for issues" box checked, which is clearly a lie.
Does Claude Code file issue reports automatically? (And how exactly would it be doing that if Anthropic was down, given that the use of an LLM in the report is obvious?)
That's exactly what Anthropic deserves (btw they can't even get "anthropic" on GitHub lmao; this must be the biggest company having to run with the wrong ID on GitHub).
Goes to show that nobody reads error messages and it reminds me of this old blogpost:
> A kid knocks on my office door, complaining that he can't login. 'Have you forgotten your password?' I ask, but he insists he hasn't. 'What was the error message?' I ask, and he shrugs his shoulders. I follow him to the IT suite. I watch him type in his user-name and password. A message box opens up, but the kid clicks OK so quickly that I don't have time to read the message. He repeats this process three times, as if the computer will suddenly change its mind and allow him access to the network. On his third attempt I manage to get a glimpse of the message. I reach behind his computer and plug in the Ethernet cable. He can't use a computer.
Hey folks, I'm Alex from the reliability team at Anthropic. We're sorry for the downtime, and we've posted a mini retrospective on our status page. We'll also be doing a more in-depth retrospective in the coming days.
For folks who have the latest version of LM Studio (0.4.1) installed: I just noticed they added Claude Code-compatible endpoints, so maybe this is an excellent moment to play around with local models, if you have the GPU for it. zai-org/glm-4.7-flash (Q4) is supposed to be OK-ish and should fit within 24GB of VRAM. It's not great, but it's always fun to experiment, and if the API stays down, you have some time to waste :)
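If you want to poke at the endpoint outside of Claude Code first, here's a minimal sketch using the Anthropic Python SDK pointed at LM Studio instead of api.anthropic.com - assuming LM Studio is serving on its default port (1234) and the new endpoint really is Anthropic-compatible; the API key is a placeholder, since local servers generally ignore it:

    # Point the Anthropic SDK at a local LM Studio server (sketch only).
    import anthropic

    client = anthropic.Anthropic(
        base_url="http://localhost:1234",  # LM Studio's default local server
        api_key="lm-studio",               # placeholder; local servers ignore it
    )

    message = client.messages.create(
        model="zai-org/glm-4.7-flash",  # whatever model you've loaded locally
        max_tokens=512,
        messages=[{"role": "user", "content": "Write a haiku about downtime."}],
    )
    print(message.content[0].text)

Claude Code itself can reportedly be pointed at a different backend via the ANTHROPIC_BASE_URL environment variable, which is presumably how this integration is meant to be used.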
As best as I can tell, there were less than 10 minutes between the last successful request I made and when the downtime was added to their status page - and I'm not particularly crazy with my usage or anything, so the gap could have been even less than that.
Honestly, that seems okay to me. Certainly better than what AWS usually does.
It appeared there like 5 minutes ago; it was down for at least 20 before that.
That's 20 minutes of millions of people visiting the status page, seeing green, and then spending that time resetting their context, looking at their system and network configs, etc.
It's not a huge deal, but for $200/month it'd be nice if, after the first two-thousand 500s went out (which I imagine takes less than 10 seconds), the status page automatically went orange.
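The detection half of that is genuinely simple. A toy sketch of a watchdog that opens an incident once the 5xx count in a one-minute window crosses a threshold - the numbers, page ID, and wiring are all illustrative, though Statuspage's incident API (POST /v1/pages/{page_id}/incidents) is real:

    # Toy watchdog: flip the status page when 5xx responses spike (sketch only).
    import time
    from collections import deque

    import requests

    WINDOW_SECONDS = 60
    THRESHOLD = 2000                  # the "two-thousand 500s" above
    PAGE_ID = "<statuspage-page-id>"  # hypothetical
    API_KEY = "<statuspage-api-key>"  # hypothetical

    recent_5xx = deque()

    def record_response(status_code: int) -> None:
        """Feed this from the serving path or a log tailer, one call per response."""
        now = time.time()
        if 500 <= status_code < 600:
            recent_5xx.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while recent_5xx and recent_5xx[0] < now - WINDOW_SECONDS:
            recent_5xx.popleft()
        if len(recent_5xx) >= THRESHOLD:
            open_incident()
            recent_5xx.clear()

    def open_incident() -> None:
        # Statuspage expects "Authorization: OAuth <api-key>" on its REST API.
        requests.post(
            f"https://api.statuspage.io/v1/pages/{PAGE_ID}/incidents",
            headers={"Authorization": f"OAuth {API_KEY}"},
            json={"incident": {"name": "Elevated API error rates",
                               "status": "investigating",
                               "impact_override": "major"}},
            timeout=10,
        )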
Anthropic might have the best product for coding, but good god, the experience is awful. Random limits you hit when you _know_ you shouldn't have yet, the jankiness of their client, the service being down semi-frequently. It feels like the whole infra is a house of cards that badly struggles 70% of the time.
I think my $20 OpenAI sub gets me more tokens than Claude's $100 one. I can't wait until Google or OpenAI overtakes them.
The entire AI hype was started because Silicon Valley wanted a new SaaS product to keep itself afloat; notice that LLMs started getting pushed right after Silicon Valley Bank collapsed.
I've had $20/month accounts with OpenAI, Google, and Anthropic for months. Anthropic consistently has more downtime and throws more errors than the other two. Claude (on the web) also throws a lot of seemingly false-positive errors: it will claim an error occurred but then work normally. I genuinely like Claude the best, but its performance does not inspire confidence.
Was getting 500 errors from Claude Code this morning but their status page was green. So frustrating that these pages aren't automated, especially considering there are paying users affected.
I mean, can you expect a vibecoding company to do stuff with 0 downtime? They brought the models down and are now panicking at HQ since there's no one to bring them back up
This made me laugh only because I imagine there could possibly be some truth to it. This is the world we are in. Maybe they all loaded Codex to fix their deploy? ;)
What does "down" mean in this context? I imagine running inference on any server would suffice. Just rent AWS capacity, since Amazon owns them anyway, and keep Claude running.
Us East Coasters here are having a chuckle (what else can we do? We can't get work done while Claude is down... I'll be damned if I have to type in code letter by letter ever again!).
Everybody who uses it knows it's down, so what value does this add? There's no context here either. Posts like these feel so much like a low-hanging, first-to-post karma grab.
It's like... popular service is down, let me post that to HN first! Low effort, but it can still end up popular.
I dunno. Maybe I'm being overly critical. Thoughts?
To add something to the discussion, though: this is a reminder of why you should not invest in just one tool, Claude or otherwise. Also, don't go enhancing one, and only one, of these agents. beads spent the better part of a medium-sized country's worth of energy to create a simple TODO list and got smeared in 10 minutes once Claude integrated todos into its client.
The usual impetus is that official status pages habitually underreport, so sharing the post gives people a place to pool the best information and discuss where/how/why things were down. Unfortunately, many seem to take the other interpretation instead, and said discussion often ends up being low quality, in a self-fulfilling loop.
The second note probably deserves to be a separate comment.
Not everyone who interacts with a service interacts with it directly. How is this a serious question?
A thing called APIs exists, and if your users rely on one but you're not interacting with it directly yourself, seeing this post could save you time when investigating an issue.
Or you are using it yourself, and seeing this post confirms it's not just you having the issue, so you can move on with your day.
This has nothing to do with it being AI specifically; it's about it being a large service. It is the same with posts about an Azure or AWS outage.
I don't use it, but I'd like to know even if it's just for entertainment value. And I can imagine people who do intend to use it learning something from it: redundancy must be part of their strategy, like you said.
So what value does it add saying "X is down" anywhere?
It's just for discussion; you can't ignore it and not talk about it with anyone when a particular service is down. Posts like this are pretty common on HN, and I haven't seen anyone complaining. So yes, it's you being overly critical.