Recently, the Union Ministry of Finance warned government employees against using AI tools such as DeepSeek and ChatGPT on official devices, citing data security risks.
- Applications of AI in governance include data-driven insights for informed policy-making and AI-powered bots for public service delivery in areas such as tax filing and grievance redressal.
Concerns with AI in Governance
- Data security and privacy risks: AI models process user inputs on external servers, where sensitive government data entered into these tools can be stored, accessed, or even misused.
- E.g., the WannaCry ransomware attack of 2017 caused widespread disruption in the UK's National Health Service, underscoring how vulnerabilities in digital systems can cripple public services.
- Bias and manipulation risks: AI models can inherit biases from training data, leading to unfair policies or systemic discrimination.
- AI-generated policy recommendations may also be manipulated by adversaries through data poisoning attacks.
- E.g., concerns of racial bias have been raised against predictive policing algorithms in the US.
- Loss of accountability: Over-reliance on AI can erode human accountability in decision-making, making it difficult to assign responsibility for errors.
- National security threats: External adversaries could exploit AI vulnerabilities to influence policy-making or conduct espionage.
- This concern is particularly acute in the case of India, as a majority of AI tools are foreign-based.
Way Forward