
> speculate on what it would enable make it honestly sound kind of unimaginative

There seems to be a weird mental block where it's inconceivable that we humans might create an intelligence that exceeds ours -- despite plenty of evidence that we already have in specific domains. There's an understated desire that whatever we build must serve us and thus must forever be "lesser".

If dogs are Intelligence Level 0.3 and we describe ourselves as Intelligence Level 1.0, then even if we create an artificial intelligence that's a clear 1.1, it must somehow remain a 0.5 in self-agency.

I have yet to read anybody considering, on a very deep level, what Intelligence 2.0, 10.0, 100.0 and so on might be. You get the occasional pop speculation like the "Culture" series or the movie "Her". Most of the time you just get 1.0 (but faster), or 1.0 (but many).

A 10.0 would simply replace the entire healthcare system with something else, not just be a chatbot. Imagine the inanity of your 1.0 chatbot arguing with the insurance company's 1.0 chatbot. What's the point of that stupidity?

A 100.0 would probably just get to the root cause and cure disease, then establish a social order that figures out what to do with all these pesky immortals.

Maybe we do end up in "the Culture" after all.

Afterthought: "The Golden Oecumene" series by John C. Wright seems to really attempt to explore a post-1.0 AI world that's not yet post-scarcity. The antagonist of the series is another >1.0 superintelligence capable of taking on Earth's own superintelligences.
