Q11). What if superintelligent AIs reason that it’s best for humanity to destroy itself?
A11). If any sufficiently intelligent AI would exterminate the human species, then by the same reasoning any sufficiently intelligent human would commit suicide, in which case there’s nothing we can do about it anyway.

Q12). The main defining characteristic of complex systems, such as minds, is that no mathematical verification of properties such as “Friendliness” is possible; hence, even if Friendliness is possible in theory, isn’t it impossible to implement?

A12). According to complex systems theory, it’s impossible to formally verify the Friendliness of an arbitrarily chosen complex system. However, this impossibility is not a relevant obstacle for engineering purposes: it’s equally impossible to formally verify whether an arbitrary complex system can add single-digit numbers, and we can still build calculators. What matters is not proving the Friendliness of an arbitrary mind, but designing a particular mind whose Friendliness we can prove, even if we can’t prove it for most minds.
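To make the calculator analogy concrete, here is a minimal sketch in Lean (an illustrative addition to this answer, not part of the original FAQ; the names myAdd and myAdd_correct are hypothetical). Rice’s theorem rules out deciding whether an arbitrary program implements addition, but a function designed with verification in mind can be proved correct in a few lines:

```lean
-- Hypothetical example: a deliberately simple adder, defined by structural
-- recursion on the second argument. We cannot decide whether an *arbitrary*
-- program adds correctly, but we can prove it for a program we designed
-- specifically so that the proof would go through.
def myAdd : Nat → Nat → Nat
  | a, 0     => a
  | a, b + 1 => myAdd a b + 1

-- Correctness: myAdd agrees with Lean's built-in addition on every input.
theorem myAdd_correct (a b : Nat) : myAdd a b = a + b := by
  induction b with
  | zero => rfl
  | succ b ih => simp only [myAdd, ih]; omega
```

The proof succeeds precisely because myAdd’s structure mirrors the structure of the property being proved; verifying an arbitrary black-box adder admits no such strategy. On the FAQ’s argument, design-for-provability of this kind, rather than after-the-fact verification of an arbitrary mind, is what a Friendly AI project would aim for.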