Governance of Military AI
India abstained from signing a pledge to govern the use of AI in warfare at the REAIM summit, a decision that highlights why regulating military AI, with its national security implications, deserves priority attention.
- About one-third of countries signed the 'Pathways to Action' declaration; notable absentees include the U.S., India, and China.
- The declining number of signatories underscores the difficulty of building consensus on military AI governance.
Challenges in Governing Military AI
- AI is a dual-use technology, which complicates verification of compliance.
- Perceived military advantages discourage states from accepting regulation; many have already invested heavily in AI for both civilian and military applications.
- Lethal Autonomous Weapons Systems (LAWS) remain controversial, with no international consensus on definitions or regulations.
India's Position
India's stance on military AI reflects its economic and security priorities: it emphasizes responsible use while refraining from binding commitments.
- India has called a binding instrument on LAWS "premature", citing the limited known use of AI in military operations.
- In this view, moral arguments for an outright ban are weak; a non-binding mechanism is proposed instead to promote transparency and safety.
Proposed Provisions for AI Governance
- Exclusion of AI from decision-making over nuclear forces.
- Voluntary confidence-building mechanisms for sharing data on AI development.
- Creation of an agreed risk hierarchy for military AI use cases.
Conclusion
India should advocate for a non-binding framework to ensure accountability and safety. As AI capabilities mature, the norms established now could evolve into a legally binding framework.