[Vibhor Maloo is a 4th year B.A. LL.B. (Hons.) student at Hidayatullah National Law University, Raipur]
On June 20, 2025, the Securities and Exchange Board of India (“SEBI”) released a consultation paper proposing a framework for the ethical application of artificial intelligence (“AI”) and machine learning (“ML”) in the Indian securities markets. This move signals SEBI’s growing recognition of the transformative and potentially disruptive impact of AI/ML technologies on capital markets, notably in algorithmic trading, surveillance, risk management, and investor services. The proposed regime seeks to embed innovation within a structure of ethical design, transparency, accountability, and board-level oversight. While SEBI’s initiative is praiseworthy and broadly consistent with evolving international norms, it leaves some crucial points unaddressed, particularly regarding its scope, enforcement mechanisms, allocation of liability, and support for responsible innovation. This post critically evaluates SEBI’s proposed approach, drawing on global precedents and Indian corporate governance norms to assess whether it strikes the appropriate balance between technological innovation and systemic integrity.
SEBI’s Ethical and Structural Blueprint for AI/ML Regulation
The consultation paper proposes a principles-based regulatory framework for the responsible use of AI/ML by market infrastructure institutions (“MIIs”), market intermediaries, and mutual funds, among other regulated entities. The framework is built on six pillars: ethics, accountability, transparency, auditability, data privacy, and fairness. Notably, SEBI proposes that firms adopt a board-approved AI Governance Framework to ensure top-level accountability and institutional monitoring, paralleling board obligations under section 134(5) of the Companies Act, 2013, which requires directors to confirm that adequate internal controls and compliance systems are in place. Periodic audits and model validations are also envisaged to detect algorithmic bias and operational irregularities.
The proposal highlights the explainability of AI/ML models and advocates for systems that stakeholders and regulators can understand, encouraging extensive documentation for supervisory review. Human oversight is considered necessary, particularly for AI systems performing functions that are increasingly central to core market operations. At the core of SEBI’s approach is an emphasis on data integrity and privacy, aligning domestically with India’s Digital Personal Data Protection Act, 2023, and internationally with the OECD’s principles on AI governance. However, while the framework is broad, it does not establish technical thresholds or tiered risk levels, nor does it distinguish between AI applications of varying complexity or systemic significance.
As AI/ML increasingly dominate decision-making in securities markets, the lack of a tailored regulatory framework exposes the market ecosystem to algorithmic errors, opacity, and investor harm. A stark reminder of this risk is the 2010 U.S. Flash Crash, where unmonitored interactions among high-frequency trading (“HFT”) algorithms temporarily wiped out roughly $1 trillion in market value. Similarly, SEBI’s enforcement action in the OPG Securities colocation case revealed the hazards of opaque algorithmic access and systemic regulatory blind spots.
Assessing Gaps in SEBI’s Oversight Model: A Global Regulatory Outlook
SEBI’s consultation paper proposes that all regulated institutions, including MIIs, intermediaries, and asset management companies, implement a board-approved AI Governance Framework, encompassing procedures for internal control, algorithmic audits, and model explainability. While this is an essential step toward aligning AI usage with responsible innovation principles, the strategy fails to establish clear lines of accountability for harm caused by AI/ML systems.
The consultation paper remains silent on whether liability for algorithmic malfunction, biased decision-making, or investor harm lies with the developer of the AI tool, the entity deploying it, or the board of the regulated entity that approves its use. This ambiguity is particularly concerning in light of section 166(3) of the Companies Act, 2013, which requires directors to act with due and reasonable care, skill, and diligence. A further issue is whether a board that merely rubber-stamps a complex AI system without fully understanding its risks can credibly be said to have discharged this statutory duty. The resulting legal and ethical uncertainty underscores the absence of clear regulatory guidance reconciling SEBI’s framework with the Companies Act.
Moreover, SEBI does not articulate any defined standard of care for boards or compliance officers when approving or supervising AI/ML systems. In contrast, the EU’s AI Act, first proposed in 2021, explicitly imposes tiered obligations based on risk classification and assigns distinct roles to providers and deployers, supporting a more calibrated and risk-sensitive governance approach. Similarly, the Monetary Authority of Singapore (“MAS”), through its FEAT Principles, 2018, sets out clear accountability expectations, with internal committees responsible for monitoring AI fairness, transparency, and ethics.
The consultation paper also mentions the need for “human oversight” in AI/ML decision-making but does not specify the nature of such oversight – whether it must be Human-In-The-Loop (HITL), Human-On-The-Loop (HOTL), or Human-In-Command (HIC). Manual, trade-by-trade oversight of HFT or automated risk surveillance is often infeasible owing to the latency sensitivity of such systems. The OECD’s AI classification framework suggests that oversight should be tailored to an AI system’s autonomy and task complexity, particularly where real-time decision-making is involved. SEBI’s paper, however, applies a uniform oversight requirement across all AI/ML systems, overlooking these practical constraints and failing to distinguish between high-risk systems (such as those used for investment advice or market surveillance) and non-critical ones (such as chatbots or customer-service algorithms).
By adopting a one-size-fits-all approach, the framework risks being simultaneously overbroad for basic tools and insufficient for complex systems with systemic ramifications. To be operationally and legally effective, SEBI must clarify who bears legal responsibility, specify due diligence criteria, and create risk-based accountability tiers. This would not only define compliance requirements but also shield directors from unwarranted post-facto enforcement risk under SEBI regulations and the Companies Act.
Charting the Way Forward for a Robust and Adaptive AI Governance Framework
SEBI’s paper is an important first step toward regulating AI/ML in financial markets, but several crucial adjustments are required to move from intention to impact. First, SEBI must define liability and institutional accountability. The existing architecture leaves the allocation of responsibility among developers, deployers, and board members ambiguous. A specific attribution model, introduced through revisions to SEBI’s regulations or interpretative guidance under the Companies Act, should establish the standard of care expected of each stakeholder. Directors and compliance officers must understand what “due diligence” entails in the context of AI supervision.
Second, the current uniform regulatory approach must be replaced with a calibrated, risk-based framework. AI applications vary greatly in sophistication and impact, ranging from simple chatbots to high-frequency trading platforms, and regulatory obligations should be proportionate to their risk profile and systemic significance. Third, SEBI should consider establishing procedural safeguards such as mandatory algorithmic audit cycles, internal accountability mechanisms, and adverse-event reporting requirements. These operational elements would add enforceability to an otherwise principles-driven system.
Fourth, the role of human oversight should be reconsidered. Rather than requiring humans to supervise every AI function, SEBI should develop oversight models suited to the latency, complexity, and criticality of each use case. Finally, the lack of a formal mechanism for stakeholder engagement and feedback integration after implementation must be addressed. SEBI should constitute a permanent AI/ML advisory council comprising engineers, lawyers, market participants, and consumer advocates to ensure ongoing refinement of the framework.
Conclusion
SEBI’s consultation paper is a commendable step towards integrating AI/ML governance into the capital market framework of India. However, for it to progress beyond aspirational principles, the framework must incorporate enforceable standards, clarify accountability, and embrace a risk-sensitive regulatory design. Absent these refinements, the framework may fall short in offering operational certainty, legal enforceability, and investor protection. By embracing calibrated supervision, institutional clarity, and stakeholder-driven adaptation, SEBI can construct a future-ready model of AI regulation. The challenge ahead lies not in resisting technological adoption but in regulating it with foresight and precision.
– Vibhor Maloo