Why in the news?
Maharashtra has launched MahaCrimeOS AI to strengthen its response to rising cybercrime, highlighting the increasing use of artificial intelligence (AI) in law enforcement in India.
About MahaCrimeOS AI
- It is an advanced AI co-pilot system developed by CyberEye (a Microsoft partner) in collaboration with MARVEL, the Maharashtra Government's Special Purpose Vehicle, and the Microsoft India Development Center (IDC).
- Function: It will automate execution and analysis, enable instant case creation, auto-generate case diaries and reports, suggest adaptive investigation paths, and create person-of-interest profiles.
Applications of AI in law enforcement
- Predictive Policing: AI models analyze crime patterns, high-risk areas, and criminal behavior, enabling law enforcement to take proactive measures.
- E.g., Clearview AI (USA) enables faster threat detection and prevention in child exploitation cases.
- Surveillance and Investigation:
- Automated drones for crime scene monitoring and suspect tracking.
- Facial recognition systems integrated with national criminal databases, and forensic analysis to examine evidence and digital crime trails.
- Automated number-plate recognition (ANPR) to identify vehicles based on color, make, and specific driving patterns.
- FIR Filing and Judicial Proceedings: AI-driven speech-to-text tools assist in real-time First Information Report (FIR) filing and case documentation, and also support witness testimony analysis and courtroom evidence evaluation.
- Data-Driven Crime Tracking and Intelligence Systems: AI enhances Crime and Criminal Tracking Network Systems (CCTNS) by integrating e-Prisons and e-Forensics databases.
- Complex Data Analysis: AI analyzes phone logs, financial trails, and digital evidence to link international cases and spot patterns in real time.
- E.g., French and Dutch law enforcement agencies, together with Europol, dismantled the EncroChat network (an encrypted platform used by criminals), with AI helping to process over 115 million criminal chats.
- Detection of Digital Threats: AI is essential in identifying deepfakes (manipulated media), morphed content, and fraudulent phishing links that are used for misinformation or harassment.
- Hotspot Forecasting: By analyzing historical crime records and environmental factors (like weather or post times), AI can generate heat maps that tell police where and when to target patrols or crowd management, as sketched below.
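To make the idea concrete, here is a minimal, hypothetical Python sketch of grid-and-hour hotspot counting; the incident records and cell names are invented for illustration and do not represent any actual police system.

```python
from collections import Counter

# Hypothetical historical incident records: (grid_cell, hour_of_day).
# A real system would draw on crime databases and could add factors
# such as weather; this sketch only shows the counting idea.
incidents = [
    ("cell_12", 22), ("cell_12", 23), ("cell_12", 22),
    ("cell_07", 18), ("cell_07", 19), ("cell_03", 2),
]

# Count incidents per (location, hour) bucket -- the "heat map".
heat = Counter(incidents)

# Rank buckets so patrols can be targeted where and when risk is highest.
for (cell, hour), count in heat.most_common(3):
    print(f"{cell} around {hour:02d}:00 -> {count} past incidents")
```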
Issues associated with AI in law enforcement
- Privacy and Mass Surveillance: The pervasive use of AI-enabled sensors and cameras risks creating a state of permanent surveillance that has a "chilling effect" on fundamental freedoms.
- The Puttaswamy judgment (2017) established the right to privacy as a fundamental right under Article 21 of India's Constitution.
- Algorithmic Bias/Discrimination: AI systems trained on historical data, which often reflects past prejudices, can perpetuate and amplify societal biases.
- E.g., facial recognition technologies often exhibit racial and gender biases, showing higher error rates for ethnic minorities.
- Low Transparency: Many advanced AI systems, particularly those using deep learning or neural networks, are considered "black boxes" because their inner logic is too complex for humans to understand.
- E.g., the Netherlands' Toeslagenaffaire (childcare benefits scandal), where an AI system wrongfully accused families of fraudulently claiming childcare benefits.
- Automation Bias: There is a strong human tendency to over-rely on AI outputs, assuming they are neutral and error-free, leading officers to ignore their own professional judgment.
- Scalability of Errors: Unlike human errors, which are usually isolated, a flawed AI system can replicate mistakes at massive scale and speed.
- E.g., Australia's Robodebt scheme used an automated "income averaging" algorithm to issue 470,000 incorrect debt notices (see the illustrative sketch after this list).
- Other issues: Absence of a dedicated AI law in India; concerns regarding fair trial when evidence is AI-analyzed; concerns regarding the data security of large databases, etc.
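To illustrate why "income averaging" scales errors, here is a simplified, hypothetical calculation; the figures are invented, and the sketch only captures the general idea of spreading annual income evenly across fortnights rather than using actual fortnightly earnings.

```python
# Hypothetical worker: earned A$26,000, all in the first half of the year,
# and correctly reported A$0 income while on benefits later in the year.
annual_income = 26_000
reported_fortnightly_income_on_benefits = 0

# Flawed "income averaging": spread annual income evenly over 26 fortnights.
averaged_fortnightly = annual_income / 26  # A$1,000 per fortnight

# The automated check treats the gap between the average and the reported
# figure as an overpayment -- producing a false "debt" at massive scale.
assumed_overpayment = averaged_fortnightly - reported_fortnightly_income_on_benefits
print(f"False 'overpayment' flagged per fortnight: A${assumed_overpayment:.0f}")
```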
Initiatives to integrate AI in Law enforcement
- India
- National Crime Records Bureau (NCRB): Authorized to implement the Automated Facial Recognition System (AFRS).
- FakeCheck software: Developed by the IT Ministry through the Centre for Development of Advanced Computing (C-DAC) to detect and flag manipulated videos and images (deepfakes).
- Bhashini Platform: Under the National Language Translation Mission to aid the processing of multilingual criminal data and the development of voice assistants for police.
- Safe City Project: Under it, Delhi Police plans to install 10,000 AI-enabled cameras equipped with facial recognition and distress-detection technologies by 2026 to alert police control rooms.
- Andhra Pradesh's e-Pragati platform: It integrates information across various government departments to support broader law enforcement and administrative efficiency.
- Bengaluru Police deployed an AI system to monitor live CCTV feeds during festivals, successfully identifying over 2,000 violations of firecracker bans by detecting flashes and smoke.
- World
- Europol's Tool Repository (ETR): A participatory platform where law enforcement agencies across Europe share software and AI tools for rescuing victims of human trafficking.
- Crime Anticipation System (Netherlands): A weekly analytical tool that uses local data and environmental factors to predict crime hotspots, allowing for efficient resource allocation.
- RADAR-iTE (Germany): A standardized algorithmic tool used by police to evaluate the risk of violent acts by known individuals within the militant-Salafist spectrum.
- INTERPOL Artificial Intelligence Toolkit: Developed with the United Nations Interregional Crime and Justice Research Institute (UNICRI) to help law enforcement agencies address the most pressing challenges in the use of AI.
- INTERPOL Responsible AI Lab (I-RAIL): It aims to be a focal point for all member countries on matters related to the responsible use of AI in policing.
Way forward
- Essential Legal and Ethical Guardrails: A formal statute governing the Automated Facial Recognition System (AFRS) is needed to ensure it does not infringe on the constitutional right to privacy and to maintain public trust.
- Human Rights Impact Assessments: India should establish a legal framework to conduct human rights assessments before any AI system is deployed, ensuring that user agencies are literate in each system's limitations.
- Human-in-the-Loop Requirements: AI risk scores should never become the determinative factor in arrests or sentencing; human oversight must remain mandatory at every stage (see the sketch after this list).
- Capacity Building and Knowledge Sharing: The National Cyber Crime Research & Innovation Centre (NCR&IC), along with the development of Standard Operating Procedures (SoPs), can train law enforcement personnel to adopt AI effectively.
- International Cooperation: Invest in international exchange and joint projects, and leverage international organizations such as INTERPOL.
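As a minimal sketch of the human-in-the-loop pattern described above, the snippet below routes an AI risk score so that it can only queue a case for human review, never trigger an enforcement action by itself; the threshold, function name, and case IDs are illustrative assumptions, not part of any real system.

```python
REVIEW_THRESHOLD = 0.7  # illustrative value, not drawn from any real system

def handle_risk_score(case_id: str, risk_score: float) -> str:
    """Route an AI risk score: it may prompt human review, never direct action."""
    if risk_score >= REVIEW_THRESHOLD:
        # A high score only triggers mandatory assessment by an officer.
        return f"Case {case_id}: queued for review by an investigating officer"
    return f"Case {case_id}: no action; score logged for audit"

print(handle_risk_score("FIR-2024-0153", 0.82))
print(handle_risk_score("FIR-2024-0154", 0.35))
```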
Conclusion
The deployment of AI systems like MahaCrimeOS AI reflects India's move towards data-driven, faster, and more effective law enforcement, particularly in tackling complex cyber and digital crimes. However, ensuring strong legal safeguards, transparency, and human oversight is essential to balance technological efficiency with privacy, fairness, and constitutional rights.