Q15). If an FAI does what we would want if we were less selfish, won't it kill us all in the process of extracting resources to colonize space as quickly as possible to prevent astronomical waste?
A15). We wouldn't want the FAI to kill us all to gather natural resources. We generally assign little utility to having a big pile of resources and no complex, intelligent life.

Q16). What if ethics are subjective, not objective? Then no truly Friendly AI could be built.

A16). If ethics are subjective, we can still build a Friendly AI: we just need to program in our collective (human-derived) morality, not some external objective morality.