The paper reviews the safe use of AI in the judiciary, outlines key ethical challenges, offers recommendations, and draws on international case studies to illustrate emerging risks.
Key Highlights of the Report
Risks and Ethical Challenges of AI (Artificial Intelligence)
- Overreliance and Loss of Human Judgement: Overreliance on AI can weaken judicial discretion, and the opaque nature of AI models reduces accountability.
- Hallucinations and Fabricated Content: AI may produce false information or cite non-existent cases, e.g. the US cases Mata v. Avianca and Coomer v. Lindell.
- Algorithmic Bias: E.g. the US COMPAS risk-assessment tool, challenged in State v. Loomis, showed potential racial bias.
- Others: Deepfakes and evidence manipulation, privacy and confidentiality risks, intellectual property concerns, etc.
Key Recommendations
- Create AI Ethics Committees: Courts should establish bodies with technical and legal experts to review AI tools and set deployment standards.
- Prefer Secure In-House AI Systems: Developing internal tools reduces confidentiality, security, and data-exposure risks.
- Adopt a Formal Ethical AI Policy: A clear framework must define authorised uses, responsibilities and accountability mechanisms.
- Others: Mandate disclosure and audit trails, provide comprehensive training, etc.
Key Initiatives