
What if superintelligent AIs reason that it's best for humanity to destroy itself?


A11). If it were really true that any sufficiently intelligent AI would exterminate the human species, then the conclusion "humanity should be destroyed" would follow from intelligence alone, and any sufficiently intelligent human would likewise commit suicide, in which case there's nothing we could do about it anyway.

Q12). The main defining characteristic of complex systems, such as minds, is that no mathematical verification of properties such as "Friendliness" is possible. So even if Friendliness is possible in theory, isn't it impossible to implement?

A12). According to complex systems theory, it's impossible to formally verify the Friendliness of an arbitrarily chosen complex system. But that is not the relevant question for engineering purposes: it's also impossible to formally verify whether an arbitrary complex system can add single-digit numbers, and we can still build calculators. The important thing is not proving the Friendliness of an arbitrary mind; it's designing a mind whose Friendliness we can prove (even though we can't prove it for most minds).
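
To make the calculator analogy concrete, here is a minimal sketch in Python (not from the original answer; the function names and the `verifies_addition` stub are illustrative). A system designed to be checkable can be verified outright, while the same property for an arbitrary program is undecidable, by the standard Rice's-theorem reduction from the halting problem.

```python
def designed_adder(a: int, b: int) -> int:
    # A calculator we designed on purpose: single-digit addition.
    return a + b

# Verifying the system we DESIGNED is easy: the input space is finite,
# so correctness can be proved by exhaustive checking.
assert all(designed_adder(a, b) == a + b
           for a in range(10) for b in range(10))

def verifies_addition(program) -> bool:
    # HYPOTHETICAL decider: "does `program` compute a + b on all digits?"
    # By Rice's theorem no such total procedure exists; this stub only
    # marks where it would go.
    raise NotImplementedError

def halts(program, x) -> bool:
    # If verifies_addition existed, we could decide the halting problem:
    # build q, which first runs program(x) and then behaves as an adder.
    def q(a: int, b: int) -> int:
        program(x)       # loops forever exactly when program(x) doesn't halt
        return a + b     # otherwise q is a correct single-digit adder
    # q computes addition if and only if program halts on x, so a perfect
    # verifier would hand us a halting decider, which is impossible.
    return verifies_addition(q)
```

The asymmetry is the whole point: exhaustively checking the adder we built takes one line, whereas deciding the same property for an arbitrary program cannot be done by any procedure at all.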
