According to CNMO, Yuval Harari, author of "Sapiens: A Brief History of Humankind", recently discussed the topic of superintelligence in an interview with Chinese media. He believes that super-intelligent AI would be difficult to control and could even come to control humans.
Yuval Harari
Asked "If AI becomes conscious or develops desires, will humans still be able to control it?", Harari replied that humans cannot control a superintelligence; once AI becomes superintelligent, it is game over. No species of lower intelligence can control a species of higher intelligence for long. Humans can impose all kinds of limits on AI, he argued, but if it is smarter than us, it will find ways to circumvent them, or persuade us to change them. In cybersecurity, the weakest link is always the human: you can build a perfect security system, yet an adversary can manipulate one person to bypass every defense. Super-intelligent AI will eventually dominate humans.
How can such problems be avoided? Harari believes humanity cannot foresee every potential future development, nor the path AI itself will take. How do you do this, in just a few years, for a super-intelligent non-organism that thinks and acts in ways completely different from ours? This is a huge problem, because you cannot foresee everything. People say superintelligence will arrive by 2030, yet we cannot solve in a few years the philosophical problem of writing constitutional rules for AI.
In Harari's view, the best thing to do is to slow down. Rather than rushing to design AI, we should build a relationship of co-evolution in which we learn from each other, leaving enough time to correct mistakes before artificial general intelligence arrives. However, Harari noted, we are caught in an AI arms race and will quickly create super-intelligent AIs that we cannot control or restrain. In the end, it will be neither the Americans nor the Chinese who rule the world, but AI.