This is a hot take; I'd love a link or explanation for why you think neural nets are human-like. For example, from the op-ed you're likely referencing:
“Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.”
Isn’t that just plainly true of LLMs? Sure, they can produce text that looks like an explanation, and sure, putting in a header telling one to spell out its reasoning will get it to output text that looks like reasoning, but I feel the nature of its hallucinations makes it clear that it’s not actually performing those steps anywhere in the net.
But maybe I’m just a Luddite? These technologies are amazing and transformative; I’m just shocked to see hate on HN for CHOMSKY of all people, the father of modern linguistics and cognitive science…
Why do you think this is impossible? Have you tried making an account on chat.openai.com? Because it can in fact do that right now; and even people who are otherwise skeptical of LLMs have panned Chomsky's article.
He wrote an op-ed recently with the oft-repeated argument that stochastic models are fundamentally distinct from structured symbolic/logical models. In simple terms, you can never really “trust” ChatGPT’s answers because it’s just guessing the answer that looks right, not applying structured reasoning like humans do.
> In simple terms you can never really “trust” chatgpt’s answers because it’s just guessing the answer that looks right, not applying structured reasoning like humans do.