Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

So essentially what happens if your AI turns out to be like Dexter? I suspect that in early forms that might be almost MORE likely since the character (and the theory he's based on) is essentially human with a "bug" in dealing with morality. Which leads to another point, you determine your self aware AI is likely to commit murder based on a debug session. Are you allowed to wipe it without having a trial first?


Empathy is a very specific trait that evolved in humans, it would be unlikely that the first AI's would have it, and if they did, it wouldn't likely be exactly the same as the version humans have. I expect the first AIs to be psychopathic/amoral, or else have an entirely different moral system than our own. The first is scary enough, the second could lead to very disturbing dystopias.

For example, the AI force feeds everyone happy pills to maximize happiness. Or kills everyone to stop anyone from ever suffering again. Or maybe it values lots of beings and so forces us to reproduce as much as possible. All sorts of disturbing worlds are possible if the AI doesn't have exactly the same values we have. And we don't even know what our own values are.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: