Why do Singularitarians think that greater-than-human intelligence will be benevolent towards humans?
The motivations and actions of a given mind emerge from its hardware-level design. No matter how hard an ape tries to appreciate human music, it will always fail: it lacks the neurological hardware that such appreciation requires. Similarly, human beings evolved to be deceptive, cunning, and potentially vicious animals. Whether or not these tendencies manifest themselves behaviorally, they exist within every neurologically normal human being.

If the first smarter-than-human intelligence is a genetically or cybernetically modified human being, then that human will presumably have the same potential for good or evil that other humans do, unless deliberate revisions are undertaken to decrease her tendency toward violating the rights of other humans. If the first smarter-than-human intelligence happens to be an artificial intelligence, then the designers would need to plan on creating a benevolent mind from the beginning, leaving out the deceptive and violent tendencies that evolution built into human beings.