News and developments
AI in the Indian Insurance Market: Regulatory Preparedness
Introduction
For some time now, there has been a growing impetus towards establishing a clear regulatory approach and governance mechanism for artificial intelligence (AI) in India.
Early groundwork was laid by the NITI Aayog’s discussion papers on the “National Strategy for AI” in 2018 and the “Principles of Responsible AI” in 2021. In November 2023, the Ministry of Electronics and Information Technology (MeitY) constituted a sub-committee under the IndiaAI initiative to offer actionable recommendations on AI governance. Thereafter, on 7 March 2024, the Government of India approved the IndiaAI Mission, which acknowledged the technology’s transformative potential as well as the need for responsible development and deployment.
We note that these developments are connected to a wider legislative effort to replace the Information Technology Act 2000 and modernise the regulatory framework governing India’s digital ecosystem by way of the proposed Digital India Act[[1]], whose broad contours were presented in March 2023 (Digital India Act). The Digital India Act aims to create an umbrella framework for emerging technologies such as blockchain, the internet of things, machine learning and AI, and to introduce stricter measures to address accountability and cybersecurity concerns.
Sectoral Norms on AI
Some sector-specific guidance has also been issued. For instance, in November 2024, the Securities and Exchange Board of India (SEBI) proposed amendments to various regulations aiming to hold registered entities liable to investors for any consequences arising from their use of AI tools[[2]]. In July 2023, the Telecom Regulatory Authority of India (TRAI) had recommended establishing an independent statutory authority “immediately”[[3]] to ensure development of responsible AI and regulation of various use cases in India. In June 2023, the Indian Council of Medical Research (ICMR) had also issued ethical guidelines for AI in biomedical research and healthcare.
AI Framework Report
More recently, on 7 January 2025, MeitY’s sub-committee published its report on “AI Governance and Guidelines Development” (Report) for stakeholder comments[[4]]. The Report attempts to identify certain key issues, analyse gaps within existing frameworks, and propose recommendations for developing an AI regulatory framework.
This article discusses the key developments in AI governance in India, with a particular focus on the recent sub-committee report of January 2025 and its potential relevance to the insurance industry.
Key Concerns and Recommendations
The Report highlights certain key areas where existing legal frameworks fall short and suggests various actionable recommendations for development of an AI governance framework. These are discussed below, with particular reference to the potential impact on the Indian insurance sector.
1. AI-led Bias and Discrimination
The Report notes that while existing laws in India do address discrimination and bias, AI systems have the potential to reinforce and amplify, at scale, existing societal biases present in their training data[[5]]. Depending on the scale of deployment across industries, such discrimination can become more challenging to detect and address despite existing legal protections[[6]]. The Report thus identifies the current lack of transparency in how AI systems arrive at their outputs as a key risk, which, coupled with unreliable or unrepresentative datasets, can lead deployers of AI systems to unknowingly use biased tools that violate existing laws.
This is particularly pertinent to the insurance sector, where underwriting processes and claims decisions are already seeing AI assistance. If an AI model used for these purposes is trained on data containing historical biases, its outputs may discriminate against certain demographic groups.
To address this and to help ascertain liability, the Report suggests that “traceability” and “transparency” may be mandated for AI systems, so that decision-making processes are interpretable and auditable, particularly for high-risk systems[[7]]. This means being able to trace a problem back to its source (traceability) in order to understand: (i) why the AI made a particular decision (explainability); and (ii) who is responsible for it (transparency). Practically, however, most AI systems are built on highly complex neural networks and effectively act as “black boxes”, since their internal calculations are not easily translated into human-understandable explanations. The fundamental concern for regulators, then, is broadly whether meaningful explainability can be mandated without degrading the performance of such complex models.
Thus, a key challenge for regulators and the AI industry would be to navigate such inherent trade-offs and develop a balanced approach.
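By way of illustration only, the following minimal Python sketch (with invented feature names, weights and thresholds, and no connection to any actual underwriting system) shows one way a deployer could make a simple scoring model traceable: every decision is logged together with its inputs, the model version, a per-feature explanation and a tamper-evident hash.

```python
# Illustrative only: every name, weight and threshold below is invented and
# bears no relation to any real underwriting model.
import hashlib
import json
from datetime import datetime, timezone

MODEL_VERSION = "toy-underwriting-score-v0.1"   # hypothetical identifier
WEIGHTS = {"vehicle_age": 0.04, "prior_claims": 0.15, "annual_mileage": 0.00001}
BIAS = 0.05
REFER_THRESHOLD = 0.40   # scores above this go to a human reviewer

def score_application(features: dict) -> dict:
    """Score an application and return a self-describing audit record."""
    # Per-feature contributions double as an exact explanation here,
    # but only because this toy model is linear.
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,     # traceability: which model decided
        "inputs": features,                 # traceability: what it saw
        "explanation": contributions,       # explainability: why it decided
        "score": round(score, 4),
        "decision": "refer" if score > REFER_THRESHOLD else "auto-accept",
    }
    # A hash over the record makes later tampering with the log detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

if __name__ == "__main__":
    application = {"vehicle_age": 7, "prior_claims": 2, "annual_mileage": 12000}
    print(json.dumps(score_application(application), indent=2))
```

The exact per-feature explanation shown here is possible only because the toy model is linear; for the deep neural networks the Report is concerned with, such explanations are approximate at best, which is precisely the trade-off noted above.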
2. Malicious Content and Deepfakes
Legal frameworks for malicious AI-generated content already exist in India, such as under the IT Act, the Indian Penal Code (now the Bharatiya Nyaya Sanhita 2023), and potentially other laws such as the Protection of Children from Sexual Offences Act 2012[[8]]. These laws can be used to address offences such as cheating by impersonation, identity theft, and publication of obscene content, and the IT Act framework also mandates intermediaries to remove content reported as impersonation in electronic form, including artificially morphed images, within 24 hours of a complaint[[9]]. Despite these legal provisions, the rapid advancement and increasing sophistication of AI models mean that existing laws may require adaptation in terms of their application and enforcement[[10]].
For insurance companies, the proliferation of deepfakes presents several potential risks, including fraudulent claims supported by AI-manipulated photographs or documents, impersonation of policyholders or company officials, and synthetic identities used to defeat onboarding and verification checks.
To address these gaps, the Report strongly recommends leveraging techno-legal measures, including technological solutions such as (i) watermarking[[11]] and (ii) content provenance chains (ie, digital methods of tracking the creation, modification and ownership of digital content, such as those used in blockchains) to establish the origin and track the lifecycle of digital content[[12]]. These measures are suggested to aid in tracing the origin of AI-generated content and determining whether it was created in violation of the law. In March 2024, the MeitY also issued an advisory to platforms requiring measures for identifying and marking misinformation and AI-generated content[[13]].
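As a purely illustrative sketch of the provenance-chain idea (simplified far beyond real standards such as C2PA, and with all names invented), the following Python snippet records each modification of a piece of content as a hash-linked entry, so that any undisclosed alteration of its history becomes detectable.

```python
# Illustrative only: a toy hash-linked provenance chain; real provenance
# standards (eg C2PA) additionally bind records to the file and sign them.
import hashlib
import json
from datetime import datetime, timezone

def add_entry(chain: list, content: bytes, actor: str, action: str) -> None:
    """Append a provenance entry linked to its predecessor by hash."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                      # who created or modified the content
        "action": action,                    # eg "created", "ai-modified"
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev_hash": chain[-1]["entry_hash"] if chain else None,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def verify(chain: list) -> bool:
    """Recompute every link; any tampering with history breaks the chain."""
    prev_hash = None
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True

if __name__ == "__main__":
    chain: list = []
    add_entry(chain, b"original claim photo bytes", "claimant", "created")
    add_entry(chain, b"altered claim photo bytes", "editing-tool", "ai-modified")
    print(verify(chain))          # True: history is intact and fully traceable
    chain[1]["actor"] = "nobody"  # attempt to rewrite history...
    print(verify(chain))          # False: tampering is detected
```

In practice, such records would also be bound cryptographically to the underlying file and signed by each actor; the point of the sketch is simply that hash-linking makes silent tampering with a document’s history detectable.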
Insurance Companies in India may also need to prepare for the widespread use of such malicious content by implementing relevant techno-legal measures and complying with any regulatory requirements that may be imposed.
3. Intellectual Property Rights
The Report highlights significant gaps and uncertainties within the existing IPR framework in India, in particular: (i) whether the use of copyrighted works to train AI models amounts to infringement[[14]], and whether the existing exceptions under the Copyright Act 1957 extend to such use[[15]]; and (ii) whether works generated by AI are capable of copyright protection at all, given the law’s emphasis on human authorship[[16]].
At present, the Report does not discuss the arguments typically advanced by AI companies, such as the non-expressive use of copyrighted works (ie, that the ideas or facts in a copyrighted work are not themselves protected by IPR laws) or the significant transformation that an AI-generated output may undergo (which may, in isolation, be compliant with copyright laws). Clarity on this is widely expected from the Delhi High Court, which is hearing the ongoing dispute between the Indian news agency ANI and OpenAI, the creator of ChatGPT[[17]].
4. Cybersecurity
While existing laws apply to AI-related cybersecurity incidents, and sectoral regulators such as SEBI and the Insurance Regulatory and Development Authority of India (IRDAI) have also issued circulars and guidelines on cybersecurity for entities within their purview, the Report acknowledges that AI-related cybersecurity threats may require stricter compliance and enforcement by regulatory bodies[[18]].
For the insurance sector, cybersecurity is already a critical concern due to the vast amounts of sensitive personal and financial data that insurers handle. The increasing use of AI across operations, such as data analysis, customer service, and even fraud detection, introduces new avenues for cyber risk. The AI systems that Insurance Companies use may themselves be targets of cyberattacks, potentially leading to data breaches, manipulation of decision-making processes, or denial of service.
5. Assigning Liability
In terms of liability, the Report suggests that AI developers as well as deployers of such systems (such as Insurance Companies) may bear responsibility for errors or harm caused by AI systems[[19]] and not be able to claim any safe harbour protection: “One of the conditions for availing safe harbour is that the intermediary does not “select or modify the content”. In case [of] AI models, this condition would not be met in many scenarios.” [[20]]
This could mean that Insurance Companies are held liable for consequences arising from AI-driven advice or decisions that affect stakeholders. This is also echoed in SEBI’s recent proposal concerning the use of AI by regulated entities in the financial market, which underscores the responsibility of deployers for the outcomes of their AI systems. The insurance sector, being regulated by the IRDAI, could see these overarching principles of accountability applied to insurance companies and intermediaries/third-party administrators (TPAs) as well.
Other Recommendations in the Report
Concluding Remarks
This Report follows similar developments in the United States[[27]], the United Kingdom[[28]] and the European Union[[29]], where governments have proposed or implemented comparable AI governance frameworks. While the Report acknowledges that overbearing regulation could stifle the organic evolution of India’s AI ecosystem[[30]], there is also a recognition that AI may pose novel risks that existing laws are ill-equipped to handle, which could implicate the fundamental rights enshrined in the Indian Constitution and carry a potential for irreversible harm.
The Report lays the foundation for India’s AI regulation path, but at present there is no indicative timeline for adopting its recommendations, or even for issuing related frameworks such as the proposed Digital India Act.
Given the increasing use of, and dependency upon, AI systems across the financial sector, entities in the insurance space should look to review their existing AI models against the principles identified in the Report, align any handling of personal data by such AI systems with the Digital Personal Data Protection Act 2023, and potentially even build internal capacity for expectations such as traceability and algorithmic audits.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
For further information on this topic please contact Tuli & Co
T: +91 11 4593 4000, E: [email protected], or W: www.tuli.co.in
[[1]] While the draft bill is yet to be published for public consultation, the then MoS Rajeev Chandrasekhar held a pre-draft consultation on 9 March 2023, where the Digital India Act’s broad structure and goals were presented in an official announcement.
[[2]] SEBI’s consultation paper on “Proposed amendments with respect to assigning responsibility for the use of Artificial Intelligence Tools by Market Infrastructure Institutions, Registered Intermediaries and other persons regulated by SEBI” of 13 November 2024.
[[3]] TRAI, Information Note to the Press (Press Release No. 62/2023) of 20 July 2023, available at https://www.trai.gov.in/sites/default/files/2024-08/PR_No.62of2023.pdf.
[[4]] IndiaAI, “Report on AI governance guidelines development”, available at https://indiaai.gov.in/article/report-on-ai-governance-guidelines-development.
[[5]] ¶III(A)(4) of the Report.
[[6]] ¶III(A)(4) of the Report.
[[8]] ¶III(A)(1) of the Report.
[[9]] Rule 3(2)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (IT Rules).
[[10]] Chaitanya Rohilla v Union of India, Delhi High Court [WP(C) 15596/2023], available at https://dhcappl.nic.in/dhcorderportal/GetQROrder.do?ID=mmh//2023//100018561701785455000_27500_155962023.pdf.
[[11]] ¶III(A)(1) of the Report.
[[13]] MeitY’s advisory of 1 March 2024 (as amended on 15 March 2024) mandated such content labelling across all intermediary platforms: “Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created generated or modified through its software or any other computer resource is labelled… that such information has been created generated or modified using the computer resource of the intermediary.”
[[14]] ¶III(A)(3)(a) of the Report.
[[15]] ¶III(A)(3)(a) of the Report.
[[16]] ¶III(A)(3)(b) of the Report.
[[17]] ANI Media Pvt Ltd v OpenAI, Delhi High Court [CS(COMM) 1028/2024].
[[18]] ¶III(A)(2) of the Report.
[[19]] ¶II(A)(2) of the Report.
[[20]] Annexure of the Report.
[[21]] Conclusion of the Report.
[[22]] ¶II(B)(1) of the Report.
[[23]] ¶II(B)(2) of the Report.
[[27]] Please note that various US states have implemented their own AI governance frameworks, such as the Colorado AI Act of 2024 and the Illinois Supreme Court’s AI Policy of 2025; there is as yet no comprehensive AI law at the federal level.
[[28]] Please note that the United Kingdom has implemented various plans and strategies to govern AI, namely the UK National AI Strategy (2021), the “A pro-innovation approach to AI regulation” white paper (2023) and the AI Opportunities Action Plan (2025).
[[29]] The European Union notified the EU Artificial Intelligence Act on 1 August 2024 and operationalised Chapters I and II on 2 February 2025.
[[30]] MeitY’s press release on “India’s AI Revolution - a Roadmap to Viksit Bharat” of 6 March 2025, available at https://pib.gov.in/PressReleseDetail.aspx?PRID=2108810&reg=3&lang=1.