
It’s just irrelevant for most users. These companies are getting more adoption than they can handle, no matter how clunky their desktop apps are. They’re optimizing for experimentation. Not performance.




While this may be true for casual users, for dev-native products like Codex the desktop experience actually matters a lot. When you are living in the tool for hours, latency, keyboard handling, file system access, and OS-level integration stop being “nice to have” and start affecting real productivity. Web or Electron apps are fine for experimentation, but they hit a ceiling fast for serious workflows -- especially if the ICP is mostly technical users.

VSCode is arguably one of the most, if not the most, popular code editors these days…

And they're pretty much the only example of an embedded browser architecture actually performing tolerably and integrating well with the native environment.

Still good enough for the majority of users.

Fair, I think I'm certainly in the minority. Especially now, more than ever, with an increasing number of non-technical people exploring vibe coding, 'good enough' really is good enough for most users.

[flagged]


Well unfortunately, that’s just how I write. None of my posts are LLM-generated, so I'm sorry they come across that way.

Apologies.

It's not irrelevant for developers, nor for users. TikTok has shown that users deeply care about the experience, and they'll flock en masse to something that has a good one.

The experience in the Claude app is fine.

More adoption? I don't think so... It feels to me that these models and tools are getting more verbose and consuming more tokens to compensate for a decrease in usage. I know my usage of these tools has fallen off a cliff as it became glaringly obvious they're useful in very limited scopes.

I think most people start off overusing these tools, then they find the few small things that genuinely improve their workflows which tend to be isolated and small tasks.

Moltbot et al., to me, seem like a psyop by these companies to get token consumption back to levels that justify the investments they need. The clock is ticking; they need more money.

I'd put my money on token prices doubling to tripling over the next 12-24 months.


> I'd put my money on token prices doubling to tripling over the next 12-24 months.

Chinese open weights models make this completely infeasible.


What do weights have to do with how much it costs to run inference? Inference is heavily subsidized; the economics of it don't make any sense.

Anthropic and OpenAI could open source their models and it wouldn't make it any cheaper to run them. You still need $500k in GPUs and a boatload of electricity to serve something like 3 concurrent sessions at a decent tok/s.

There are no open source models, Chinese or otherwise, that can be run profitably and give you productivity gains comparable to a foundation model. No matter what, running LLMs is expensive, the capex required per tok/s is only increasing, and the models are only getting more compute intensive.

The hardware market literally has to crash for this to make any sense from a profitability standpoint, and I don't see that happening, therefore prices have to go up. You can't just lose billions year after year forever. None of this makes sense to me. This is simple math, but everyone is literally delusional atm.
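
For what it's worth, the disagreement here is really about what you plug into a very simple amortization formula. A minimal sketch of that math; the function name and every input value (capex, lifetime, power draw, throughput, utilization) are illustrative assumptions, not measured figures:

    # Back-of-envelope amortized cost per 1M output tokens for a
    # self-hosted inference box. All inputs are illustrative assumptions.
    def cost_per_million_tokens(capex_usd=500_000,       # assumed hardware capex
                                lifetime_years=3,        # assumed depreciation window
                                power_kw=10,             # assumed draw under load
                                usd_per_kwh=0.10,        # assumed electricity price
                                agg_tokens_per_sec=150,  # assumed aggregate throughput
                                utilization=0.5):        # assumed busy fraction
        seconds = lifetime_years * 365 * 24 * 3600
        total_tokens = agg_tokens_per_sec * utilization * seconds
        electricity = power_kw * (seconds / 3600) * usd_per_kwh
        return (capex_usd + electricity) / (total_tokens / 1e6)

    print(f"~${cost_per_million_tokens():.2f} per 1M tokens")  # ~$74 with these assumed inputs

Scale the aggregate throughput and utilization up to provider-style batching and the same formula gives a much lower number, which is exactly the crux both sides here are arguing about.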


Open weights means that the current prices for inference of Chinese models are indicative of their cost to run, because multiple independent providers are competing to serve the same weights:

https://openrouter.ai/moonshotai/kimi-k2.5

It's a fantasy to believe that every single one of these 8 providers is serving at incredibly subsidized dumping prices 50% below cost, and that once that runs out you'll suddenly pay double per 1M tokens for this model. It's incredibly competitive with Sonnet 4.5 for coding at 20% of the token price.

I encourage you to become more familiar with the market and stop overextrapolating purely based on rumored OpenAI numbers.


I'm not making any guesses, I happen to know for a fact what it costs. Please go try to sell inference and compete on price. You actually have no clue what you're talking about. I knew when I sent that response I was going to get "but Kimi!"

The numbers you stated sound off ($500k capex plus electricity per 3 concurrent requests?), especially now that the frontier has moved to ultra-sparse MoE architectures. I've also read a couple of commodity inference providers claiming that their unit economics are profitable.
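
The sparsity point is quantifiable: a common rule of thumb is that decode compute scales with the active parameters per token (roughly 2 FLOPs per active parameter), not the total parameter count. A toy comparison, with parameter counts chosen purely for illustration rather than quoted from any model spec:

    # Rule of thumb: ~2 FLOPs per active parameter per generated token.
    # Parameter counts are assumptions for illustration, not official figures.
    def flops_per_token(active_params):
        return 2 * active_params

    dense = flops_per_token(70e9)   # hypothetical dense 70B model
    moe = flops_per_token(32e9)     # hypothetical sparse MoE: 32B active of ~1T total

    print(f"dense:  {dense:.1e} FLOPs/token")
    print(f"sparse: {moe:.1e} FLOPs/token")

The weights still have to sit in memory either way, which is where the capex argument comes back in.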

You're delusional. I didn't even include the labor to install and run the damn thing. More than $500k.

Okay, so you are claiming "every single one of those 8 providers, along with all others who don't serve openrouter but are at similar price points, are subsidizing by more than 50%".

That's an incredibly bold claim that would need quite a bit of evidence, and just waving "$500k in GPUs" around isn't it. Especially when individuals are reporting more than enough tps at native int4 with <$80k setups, without any of the scaling benefits that commercial inference providers have.
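
The raw weight footprint behind that claim is easy to sanity-check; a quick sketch, where the 1T total parameter count is an assumption standing in for a large open-weights MoE:

    # Approximate weight memory at a given quantization level.
    # The 1e12 parameter count is an assumption, not a quoted spec.
    def weight_gb(total_params, bits_per_param):
        return total_params * bits_per_param / 8 / 1e9

    print(f"int4: {weight_gb(1e12, 4):.0f} GB")  # ~500 GB of weights
    print(f"fp8:  {weight_gb(1e12, 8):.0f} GB")  # ~1000 GB of weights

That only covers the weights, though; KV cache, batching, and utilization are what actually drive cost per token at scale.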


Imagine thinking that $80k setups that run Kimi and serve a single user session are evidence that inference providers are running at cost, or even close to it. Or that this fact is some sort of proof that token pricing will come down. All you one-shotted LLM dependents said the same thing about DeepSeek.

I know you need to cope because your competency is 1:1 correlated with the quality and quantity of tokens you can afford, so have fun with your "think for me" SaaS while you can afford it. You have no clue how much engineering goes into providing inference at scale. I wasn't even including the cost of labor.


It really is insane how far it's gone. All of the subsidization and free usage is deeply anticompetitive, and it is only a profitable decision if they can recoup all the losses. It's either a bubble and everything will crash, or within a few years once the supplier market settles, they will eventually start engaging in cartel-like behavior and ratchet up the price level to turn on the profits.

I suspect making the models more verbose is also a source of inflation. You’d expect an advanced model to nail down the problem succinctly, rather than spawning a swarm of agents that brute-force something resembling an answer. Biggest scam ever.


