G. Saikumar & Intisar Aslam*

The article explores the persistent challenge of infringement of rights and AI bias, underscoring how the oversight architecture of clinical trials offers a valuable model to address this issue. The authors argue that independent, multidisciplinary ethics committees are indispensable for ensuring AI systems remain fair and aligned with constitutional values.
India’s ambitious pursuit to become a developed economy has placed the digital sector at the heart of its economic and developmental agenda for 2047. As digital technologies and artificial intelligence [“AI”] become ever more deeply embedded in our daily lives and governance, the collection, processing, and use of digital personal data have emerged as critical determinants not only of economic efficiency but also of the protection of fundamental rights and societal trust. However, the increasing reliance on AI systems introduces significant risks, particularly the perpetuation and amplification of bias and discrimination. Such risk is not new, as the 1988 British medical school admissions case demonstrates, yet decades later, bias remains a persistent challenge without any regulatory oversight. The consequences are especially acute in India, a textbook example of a multi-faceted society with its diversity of vibrant cultures, languages, castes, religions, and socio-economic backgrounds. Against this backdrop, transparency around datasets and algorithmic processes becomes imperative, particularly when AI is deployed in contexts that affect the public at large.
This article argues that a bio-medical paradigm offers a compelling approach to tackling AI bias. The article proceeds with a three-fold aim: First, it briefly outlines the critical factors that any techno-regulatory framework must incorporate to adequately respond to the unique challenges posed by AI. Second, it argues for the setting up of an independent Ethics Committee, modelled on the regulatory structure employed in clinical trials. Lastly, the article elucidates the potential of such a committee to respond to, mitigate, and eliminate algorithmic bias within AI systems across three phases: pre-development, development, and post-deployment.
Bias-proofing AI Systems: Critical Considerations and Regulatory Imperative
In the context of India’s ongoing digital transformation, the risks associated with bias and discrimination in AI systems have become increasingly salient. This underscores the necessity for a robust, independent framework to oversee the design and process of collection, storage, sharing, dissemination, and processing of personal data – a need underscored by the yet-to-be-enforced Digital Personal Data Protection Act, 2023 [“DPDP Act”]. The DPDP Act aims to protect the rights of citizens while striking the ideal balance between innovation and regulation, ensuring that everyone may benefit from India’s expanding innovation ecosystem and digital economy. At a time when AI has become the defining paradigm of the 21st century, three crucial considerations stand out that encourage both innovation and ethical standards.
1. Lawfulness, Fairness, and Transparency
Transparent rules and practices prevent latent bias and hold organisations accountable, reducing the risk of discriminatory practices. A fair, transparent, and ethical framework not only reduces economic risk and reputational harm to organisations but is also essential to building an open, long-lasting, and sustainable company of the future.
2. ‘Human in the loop’ standard
Given the risk of bias or discriminatory output inherent in the automated decision-making of AI systems, it is imperative to have a ‘human in the loop’, i.e., human intervention. This ensures that humans provide feedback and authenticate the data during AI training and deployment, which is crucial for accuracy and for mitigating risks of bias. It may be argued that such human intervention may itself introduce human bias, causing a snowball effect; however, the Ethics Committee proposed in this article addresses this concern.
3. Data Security and Data Anonymisation
Robust data security and effective anonymisation protect personally identifiable information, prevent misuse, and guard against possible bias. Allowing data principals (or data subjects, under the GDPR) to correct or erase their data, and ensuring that processing is based on informed consent, creates a level playing field and can further minimise the risk of AI systems entrenching historical or systemic biases.
A comparative analysis of the DPDP Act and the European Union’s General Data Protection Regulation (“GDPR”) reveals both convergences and gaps in respect of the above considerations to address algorithmic bias:
| Principle | DPDP Act (India) | GDPR (EU) |
| --- | --- | --- |
| Lawfulness | Consent under Section 6 or ‘legitimate uses’ under Section 7 | Lawful bases under Article 6, including legitimate interests |
| Human-in-the-Loop | No explicit requirement | Right to human intervention in automated decisions under Article 22 |
| Data Security | Yes. Section 8(5) mandates data fiduciaries to implement reasonable security safeguards to ‘prevent personal data breach’ | Yes. Articles 5(1)(f) and 32 require implementation of technical and organisational measures to ‘protect against unauthorised or unlawful processing of personal data’ |
| Data Anonymisation | Does not refer to or exclude anonymised data. However, since identifiability is the standard for applicability of the Act, the process of anonymisation remains covered until the data is absolutely unidentifiable | Processing personal data for the purpose of anonymisation is itself processing and must have a legal basis under Article 6 |
| Right to Rectification | Yes. Section 12 grants the right to correct inaccuracies or update data | Yes. Article 16 grants the right to rectification |
| Right to Erasure | Yes. Section 8(7) grants the right to erasure unless retention is necessary for compliance with law | Yes. Broader right to removal (‘right to be forgotten’) under Article 17, subject to exceptions |
| Right to Object to and Restrict Processing | Withdrawal of consent under Section 6(6) causes the cessation of processing of personal data | Yes. Article 18 grants the right to restrict processing in instances of inaccurate data, unlawful processing, etc. |
While the DPDP Act introduces several important protections, it lacks explicit provisions for human oversight in automated decision-making, which is central to the GDPR’s approach for preventing and mitigating algorithmic bias. Unlike global counterparts such as Singapore’s Model AI Governance Framework, the EU AI Act, and the OECD AI Principles (India is not an adherent), the DPDP Act lacks a dedicated governance framework for AI, leaving further gaps in oversight and accountability. The above comparison underscores the need for India’s regulatory framework to evolve further, particularly in the context of AI governance, to ensure comprehensive protection against algorithmic bias.
Remedy: Clinical Trial Ecosystem as a Model for Data Governance
Given the fast pace of AI research and the risk that innovation quickly renders regulation obsolete, regulatory frameworks must be both sustainable and flexible. This requires not only an initial impact assessment but also periodic re-evaluations to address evolving real-world challenges. Such adaptive governance is exemplified in biomedical research, where Ethics Committees play a central role in overseeing clinical trials, ensuring the rights, safety, and well-being of participants, and conducting ongoing reviews.
India’s New Drugs and Clinical Trials Rules, 2019 [“CT Rules”] can serve as a valuable regulatory model under the DPDP Act for reducing AI-based discrimination. In this model, pharmaceutical companies seeking to conduct trials must obtain approval from the Drug Controller General of India [Rule 22] and an institutional or independent Ethics Committee [Rule 25]. These committees, comprising diverse experts and community representatives, oversee the entire trial process, review protocols, ensure informed consent, and monitor for adverse events [Rule 7]. The clinical trial agreement (“CTA”) details the roles, responsibilities, and liabilities of all parties, and the Ethics Committee acts as a safeguard for the rights of the participants and public interest.
A similar approach can be adopted for the collection, storage, and processing of personal data in India’s digital landscape. Independent Ethics Committees – constituted outside direct government control – could oversee specific sectors such as procurement platforms, social media, and healthcare. Unlike the existing Data Protection Board, the Ethics Committee would be composed and appointed without the involvement of the Central Government. Its composition can include experts in AI, law, ethics, and relevant technical domains, ensuring a balanced and unbiased approach to oversight. The Ethics Committee can further mirror the balanced composition under the CT Rules, with 50% external members. Furthermore, the Committee can guide data fiduciaries to ensure compliance with applicable laws and ethical norms. It can also serve as the first point of contact for persons seeking remedies, and could recommend actions to the Data Protection Board in cases of non-compliance.
Role of Ethics Committee in Eliminating Discrimination by AI Systems
During the pre-development phase, i.e., before AI systems are built, the Ethics Committee can conduct rigorous risk assessments to pre-empt bias. It can audit training datasets for representativeness, ensuring marginalised groups are not underrepresented – a common pitfall in facial recognition or hiring algorithms. Toolkits such as IBM’s AI Fairness 360 can be employed to test datasets and models for discriminatory patterns. The Committee can also require techniques like reweighting datasets or adversarial debiasing to correct imbalances. For example, an Ethics Committee scrutinising a ‘loan-approval AI’ might require developers to exclude postal codes to obviate socio-economic discrimination.
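To make the reweighting idea concrete, the following is a minimal, illustrative sketch of the ‘reweighing’ pre-processing technique (the same idea implemented in toolkits such as AI Fairness 360): each training instance is weighted so that group membership and outcome become statistically independent in the weighted dataset. The function name and the toy hiring data are hypothetical, chosen only for illustration.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight each instance by expected / observed joint frequency,
    i.e. w(g, y) = P(g) * P(y) / P(g, y), so that protected group
    and outcome become independent in the weighted dataset."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy hiring dataset: group 'A' is approved more often than group 'B'
groups = ['A', 'A', 'A', 'B', 'B', 'B']
labels = [1, 1, 0, 1, 0, 0]   # 1 = approved
weights = reweighing_weights(groups, labels)
# Underrepresented favourable outcomes (e.g. approved 'B' applicants)
# receive weights above 1; overrepresented ones fall below 1.
```

An Ethics Committee would not run such code itself; rather, it could require developers to demonstrate that a comparable correction was applied and documented.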
In the development phase, the Committee can devise a second layer of verification to minimise the probability of biased outcomes. This layer can involve human intervention through the human-in-the-loop standard, under which humans authorise high-stakes decisions. To balance such human oversight, explainable AI (XAI) can ensure that stakeholders understand how decisions are made, catering to transparent processing. Another method of bias minimisation is to incorporate multidisciplinary input – from sociologists, ethicists, and community stakeholders – which increases the likelihood of catching bias rooted in ingrained presumptions, such as gendered language in resume-screening tools.
In the post-deployment phase, AI systems may develop, learn, or drift towards bias with the passage of time, and their outputs may become marred by bias. To address this, the Ethics Committee can supervise ongoing audits to detect such bias drift, for instance by tracking metrics that measure disparities in outcomes across groups. Where drift is found, the Committee can take, or instruct, corrective measures: updating the datasets on which the specific AI system is trained, or retraining the model altogether. It may be argued that retraining imposes an economic burden on developers; however, such a measure would prevent disputes and litigation and, in the long run, prove cost-effective for the developer concerned.
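One widely used disparity metric such an audit could track is the disparate impact ratio: the lowest group’s rate of favourable outcomes divided by the highest group’s. The sketch below is illustrative only; the 0.8 threshold follows the well-known ‘four-fifths rule’ from US employment-selection guidance, and the audit data, group names, and function names are hypothetical.

```python
from collections import defaultdict

def selection_rates(groups, outcomes):
    """Per-group rate of favourable outcomes (e.g. loans approved)."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        favourable[g] += y
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(groups, outcomes):
    """Lowest group selection rate divided by the highest.
    A value below 0.8 (the 'four-fifths rule') can flag bias drift."""
    rates = selection_rates(groups, outcomes)
    return min(rates.values()) / max(rates.values())

# Simulated audit window: group A approved 80% of the time, group B 40%
groups = ['A'] * 10 + ['B'] * 10
outcomes = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
ratio = disparate_impact_ratio(groups, outcomes)   # 0.4 / 0.8 = 0.5
```

A ratio of 0.5, well below the 0.8 benchmark, is the kind of signal that could trigger the corrective measures described above, such as updating training data or retraining the model.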
Conclusion
The Ethics Committee architecture from clinical trials offers a valuable and viable solution to develop a regulatory framework that encourages innovation while safeguarding the rights of stakeholders. Furthermore, such a Committee can complement the existing Data Protection Board, adding AI-specific expertise and independent oversight. This will build public trust and acceptance of AI systems. Thus, where a CT Rules Ethics Committee is tasked with protecting the rights, safety, and well-being of trial participants under trial protocols reviewed and approved per international standards, a DPDP Act Ethics Committee can help ensure that AI systems are trained, developed, and deployed on datasets with minimal to no bias, in consonance with the human right to equality. This step would give further effect to India’s vision to become not only a leader in responsible AI governance but also a developed nation by 2047.
As with clinical trials, where Ethics Committees have attempted to achieve an equilibrium between scientific advancement and human safety, a data protection Ethics Committee could help navigate the rocky terrain where personal data, artificial intelligence, and fundamental rights intersect.
*Mr. G. Saikumar, B.E., LLB is a Senior Advocate at the Supreme Court of India. He has served on various institutional ethics committees, overseeing clinical trials and ethical standards in medical research. Beyond his legal practice, Mr. Saikumar has served as legal advisor for the Indian Red Cross Society and the International Federation of Red Cross and Red Crescent Societies for South Asia, formulating legal frameworks for disaster response and humanitarian efforts.
*Intisar Aslam is a fourth-year student pursuing a BA LLB (Hons.) at the National University of Study and Research in Law, Ranchi. He has assisted professors at several foreign universities, including the Queen Mary University of London and the National University of Singapore.