
What's even more bizarre is that ChatGPT is proving him right, that a neural net can build logical grammar, and he is denying it!


This is a hot take; I'd love a link or explanation of why you think neural nets are human-like. For example, from the op-ed you're likely referencing:

“Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.”

Isn’t that just plainly true of LLMs? Sure, they can produce text that looks like an explanation, and sure, putting in a header telling one to spell out its reasoning will get it to output text that looks like reasoning, but I feel the nature of its hallucinations makes it clear that it’s not actually performing those steps anywhere in the net.

But maybe I’m just a Luddite? These technologies are amazing and transformative; I’m just shocked to see hate on HN of CHOMSKY of all people, the father of modern linguistics and cognitive science…


Why do you think this is impossible? Have you tried making an account on chat.openai.com? It can in fact do that right now, and even people who are otherwise skeptical of LLMs have panned Chomsky's article.


What did he say about logical grammars and neural nets in the past? Sorry, I'm not very familiar with him.


He wrote an op-ed recently with the oft-repeated argument that stochastic models are fundamentally distinct from structured symbolic/logical models. In simple terms, you can never really “trust” ChatGPT’s answers because it’s just guessing the answer that looks right, not applying structured reasoning like humans do.

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...

But I might be missing something more specific; I'd love to read that.


> In simple terms, you can never really “trust” ChatGPT’s answers because it’s just guessing the answer that looks right, not applying structured reasoning like humans do.

Here's an argument otherwise, that they really are developing some sort of world model: https://the-decoder.com/stochastic-parrot-or-world-model-how...



