Understanding AI's Role in Strategic Affairs
Discussion of a potential artificial intelligence (AI) arms race has gained momentum, with concerns centering on the development of Artificial General Intelligence (AGI): AI that matches or surpasses human cognitive abilities and can solve problems well beyond its training.
Current Scholarship and Contributions
Despite growing discussion of AI's evolving capabilities, there remains a significant gap in understanding its impact on strategic affairs. A noteworthy contribution to this dialogue is a paper by Eric Schmidt, Dan Hendrycks, and Alexandr Wang, although several of its analyses fall short.
Debates and Analogies
- There is an ongoing debate over how plausible AGI is and how states should prepare for the security threats it might pose.
- The concept of AI non-proliferation aims to keep dangerous capabilities out of the hands of malevolent actors.
- The paper introduces Mutual Assured AI Malfunction (MAIM), modeled on the nuclear concept of Mutual Assured Destruction (MAD), but the comparison is flawed because AI development is far more diffuse.
Challenges and Strategic Considerations
- Destroying the AI projects of rival or rogue states risks unintended escalation, and AI's distributed infrastructure makes such strikes difficult to carry out in practice.
- Proposals such as controlling the distribution of AI chips face challenges because, unlike nuclear weapons, trained AI systems do not depend on a continuing supply of controlled physical material.
Cautions and Future Directions
- Assumptions that AI will enable bioweapons or large-scale cyberattacks remain conjectural and may not justify treating AI as a weapon of mass destruction.
- The assumption that AI development will be state-driven is speculative, given the private sector's leading role.
- Historical comparisons, such as those with nuclear technology, may not suit AI, whose development and deployment follow very different paths.
- The General Purpose Technology (GPT) framework could provide a better analogy for understanding AI's impact across sectors.