
I literally got this problem earlier today on ChatGPT, which claims to be based on o4-mini. So no, it does not sound like it's just a problem with Claude or older GPTs.

And on "prompting", I think this is a point of friction between LLM boosters and haters. To the uninitiated, most AI hype sounds like "it's amazing magic!! just ask it to do whatever you want and it works!!" When they try it and it's less than magic, hearing "you're prompting it wrong" seems more like a circular justification of a cult follower than advice.

I understand that it's not - that, genuinely, it takes some experience to learn how to "prompt good" and use LLMs effectively. I buy that. But some more specific advice would be helpful. Cause as is, it sounds more like "LLMs are magic!! didn't work for you? oh, you must be holding it wrong, cause I know they infallibly work magic".



> I understand that it's not - that, genuinely, it takes some experience to learn how to "prompt good" and use LLMs effectively

I don't buy this at all.

At best, "learning to prompt" is just hitting the slot machine over and over until you get something close to what you want, which is not a skill. This is what I see when people "have a conversation with the LLM".

At worst, you are a victim of the sunk cost fallacy: believing that because you spent time on a thing, you have developed a skill for a thing that really involves no skill. As a result, you are deluding yourself into thinking that the output is better, not because it actually is, but because you spent time on it, so it must be.



