
You misunderstood what I was saying. I know the chatbot itself is not structured the way we are. I'm saying that our reactions to chatbots are the standard tools of mind-building that we apply to our own kids (and pets).


If I understand you, you're saying that we see patterns of intelligence or understanding in ML models in the same way we see them in children or animals?

If so, I agree. I think that's our big flaw, in fact, because we instinctively apply patterns from birth, even when those patterns shouldn't be applied. So we see faces in the Moon or on Mars that aren't there. We see shapes moving in the dark that don't exist. And we seem to believe that ML models will develop over time as children or animals do, based on nothing more than our perception of similarity: our instinct to apply patterns even when we shouldn't.

Unlike a human baby, that ML model isn't going to develop increasing complexity of thought over time. It's already maxed out: once training ends, its weights are frozen, and no amount of conversation changes them. New models might up the complexity slightly, but that baby is going to vastly surpass any existing model within weeks or even days.
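
To make the "maxed out" point concrete, here's a minimal sketch (assuming PyTorch; nn.Linear is just a stand-in for any trained model): no amount of inference-time use updates a deployed model's weights.

    # Minimal sketch, assuming PyTorch. nn.Linear stands in for any
    # trained model; the point is that inference never changes it.
    import torch
    import torch.nn as nn

    model = nn.Linear(8, 2)
    model.eval()                       # inference mode
    before = model.weight.clone()

    with torch.no_grad():              # no gradients, so no learning
        for _ in range(1000):          # a thousand "conversations"
            model(torch.randn(1, 8))

    assert torch.equal(model.weight, before)   # weights unchanged

A child is rewired by every interaction; this loop changes nothing whether you run it a thousand times or a billion.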



