Securities and Exchange Board of India (Sebi) on Cybersecurity Risks from AI
The Securities and Exchange Board of India (Sebi) has issued a warning about emerging cybersecurity risks from advanced artificial intelligence (AI) tools. Because these tools can automatically discover system vulnerabilities, they pose a significant threat to the security of regulated entities.
Key Concerns
- AI-driven vulnerability identification tools, similar to "Mythos," are evolving rapidly and increasing the risk of exploitation.
- Concerns have been raised about data confidentiality, application integrity, and the reliability of outputs from these tools.
- The interconnected nature of the securities market means vulnerabilities in one area can have cascading effects.
Measures and Strategies
To tackle these emerging threats, Sebi has undertaken several initiatives:
- Formation of a task force named Cyber Suraksha AI, including representatives from market infrastructure institutions and other stakeholders.
- The task force will assess AI model risks, develop mitigation strategies, facilitate threat intelligence sharing, and review the cybersecurity posture of third-party service providers.
- Issuance of a detailed advisory outlining immediate to medium-term measures, such as:
  - Patching systems promptly.
  - Conducting regular vulnerability assessments with AI tools.
  - Strengthening API security and enhancing monitoring via security operations centers.
Frameworks and Directives
- Sebi encourages entities to expedite onboarding to the Market-SOC framework established by exchanges.
- Entities must ensure continuous risk assessments, including AI-related scenarios.
- Adoption of measures like zero-trust architecture and system hardening is recommended to reduce attack surfaces.
- Regulated entities should engage with vendors for timely updates and develop long-term AI strategies for threat detection and mitigation.
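The zero-trust recommendation above can be sketched as a per-request authorization check that grants no implicit trust based on network location: every request must pass identity, device-compliance, and least-privilege checks. The roles, actions, and policy table below are hypothetical illustrations, not part of Sebi's directives.

```python
from dataclasses import dataclass

# Hypothetical least-privilege policy: which roles may perform which actions.
POLICY = {
    "ops": {"read_logs", "patch_system"},
    "analyst": {"read_logs"},
}

@dataclass
class Request:
    token_valid: bool       # identity re-verified on every request
    role: str               # caller's assigned role
    action: str             # operation being attempted
    device_compliant: bool  # e.g. patched, disk-encrypted endpoint

def authorize(req: Request) -> bool:
    """Zero-trust style check: being 'inside the network' confers nothing;
    each request must independently satisfy identity, device posture, and
    least-privilege policy."""
    if not req.token_valid or not req.device_compliant:
        return False
    return req.action in POLICY.get(req.role, set())
```

The design point is that authorization is evaluated on every call rather than once at a perimeter, which is what reduces the cascading exposure the advisory warns about when one system in an interconnected market is compromised.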