Majmudar & Partners

Artificial Intelligence and Big Data regulation: Analyzing India’s emerging regulatory framework

The Telecom Regulatory Authority of India (the “TRAI”) has recently released recommendations on “Leveraging Artificial Intelligence and Big Data in the Telecommunication Sector” and has suggested that the Indian government adopt a regulatory framework for AI that applies across sectors.  In this update, we analyze the significance of the proposed framework and its implications for India’s position on AI.

Background

Artificial intelligence (“AI”) is already making an impact across the globe and in various sectors, and it has the potential to profoundly influence industries across the spectrum, including finance, healthcare and customer services.  AI functions by processing large volumes of data using advanced algorithms to recognize patterns, learn from experience and make predictions.

The development and deployment of AI technologies are susceptible to various risks, such as perpetuating, and even amplifying, existing biases, which can lead to unfair or discriminatory outcomes when making decisions or predictions.  Additionally, inaccurate models can generate misleading insights, affecting critical areas such as healthcare diagnoses and financial forecasts.  The use of AI has, therefore, sparked discussions on ethics and bias, emphasizing the need for transparent and accountable AI development, with policymakers, businesses and technologists playing a crucial role in shaping AI’s path to ensure it aligns with human values.

Global perspectives

Jurisdictions across the world have begun drafting policies and regulations and are taking up positions in the AI space.  Governments appear to be pivoting towards a policy approach that harnesses and regulates AI, with varying levels of priority.

People’s Republic of China: China is set to become the first country to have an operational AI law, with the Chinese AI regulations (the “PRC Regulations”) scheduled to come into effect on August 15, 2023.  The provisional draft of the PRC Regulations was published jointly by seven (7) regulatory bodies.  The most prominent directive under the PRC Regulations relates to content: AI service providers are prohibited from generating content that is violent, obscene or against the national interest, and AI chatbots are prohibited from generating false or harmful information.  AI service providers must also select training data in a way that avoids discrimination, report users who use AI to produce illegal content, and restrict minors from becoming addicted to such services.

European Union: As with the General Data Protection Regulation, the European Union (the “EU”) has taken a regulation-heavy approach, with the proposed EU Artificial Intelligence Act (the “EU Act”) setting out a comprehensive regulatory framework that aims to ensure the responsible development and deployment of AI within the EU.  The EU Act adopts a risk-based approach, categorizing AI systems into risk tiers based on their potential for harm and degree of autonomy.  High-risk AI systems, such as those used in healthcare and transportation, face strict requirements on data transparency, human oversight and robustness, and certain AI practices that pose an unacceptable risk to individuals’ rights, such as social credit scoring, are banned outright.  The EU Act also emphasizes transparency by mandating clear labelling of AI-generated content, provides safeguards against biased algorithms, and establishes a European Artificial Intelligence Board to oversee implementation.  Overall, the focus remains on ethical implications, aimed at fostering trust and accountability in AI technologies across the EU.

United States of America: The US Federal Trade Commission has published a set of five (5) principles that companies must follow when using AI for consumer protection.  Companies are required to: (i) be transparent with consumers about their interaction with AI tools; (ii) clearly explain decisions that result from AI; (iii) ensure that decisions are fair; (iv) ensure that the data and models being used are robust and empirically sound; and (v) hold themselves accountable for compliance, ethics, fairness and non-discrimination.  An AI Bill of Rights, a non-binding roadmap for the responsible use of AI, has also been proposed.

United Kingdom: The UK has taken a “pro-innovation” approach to AI regulation.  The UK’s proposed AI framework does not seek to assign rules or risk levels to entire sectors or technologies, but instead regulates AI on the basis of outcomes.  Rather than hard-set rules, principles such as safety, security, robustness, fairness, accountability and redress will govern the use of AI, and individual sector regulators with domain-specific expertise will tailor the implementation of these principles to specific contexts.

Other jurisdictions: Saudi Arabia has taken a more relaxed stance on regulation, creating the Saudi Data & AI Authority to prepare and execute the national data and AI agenda and the National Center for Artificial Intelligence to promote research, development and advancement in AI; its approach focuses on innovation rather than regulation.  Australia has proposed a set of eight (8) “AI Ethics Principles” that organizations may voluntarily adopt.  Singapore is in the process of creating a voluntary model framework that provides implementable guidance to private sector organizations on key ethical and governance issues.  Canada has unveiled a national AI strategy with an emphasis on talent development, research and industry cooperation, and Japan’s Society 5.0 initiative seeks to incorporate AI into everyday life to create a human-centred society.

Therefore, differing approaches are being taken towards AI and its regulation, with some countries providing a high degree of freedom for innovation and others seeking to actively regulate AI.

India is also emerging as a key player in the AI regulation space, and the path taken by India must be closely monitored due to the impact it may have on the AI space.

The AI landscape in India

India appears to be taking a mixed approach to AI, seeking to balance innovation and regulation.  On the innovation front, India has taken steps to encourage the responsible and inclusive development of AI for economic progress.  India’s National Strategy on AI, released by NITI Aayog in 2018, covers training, research and development, centres of excellence, data accessibility, and high-performance computing infrastructure.  India's objective of social transformation through responsible AI was the focus of the Responsible AI for Social Empowerment (RAISE) event in 2020, which attracted participants from academia, business and governments worldwide.  India is a founding member of the Global Partnership on Artificial Intelligence (the “GPAI”), alongside the United States of America, the United Kingdom, the European Union, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, the Republic of Korea and Singapore.  The GPAI is a global initiative supporting ethical AI development based on inclusivity, diversity, creativity and economic prosperity.  Additionally, the Indian government has established the "National Artificial Intelligence Portal," a national hub for AI-based projects.

On the regulatory front, on July 20, 2023, the TRAI released recommendations (the “TRAI Recommendations”) covering a diverse range of topics related to leveraging big data and artificial intelligence in the telecommunication sector.  The TRAI Recommendations discuss the definition of AI, emerging threats, ethical standards and the need for regulation, as well as constraints on AI adoption and the requirement for a data governance framework.  They also examine global perspectives and advise on the regulatory approach India should take.  The focus of the TRAI Recommendations extends beyond the telecommunication industry, with the TRAI acknowledging that AI will have an impact across many sectors, including healthcare, banking, transportation, education and agriculture.  Given this widespread influence across industries, including telecommunication, a common framework for all industries has been suggested (a more detailed analysis of the recommendations is set out in the Annexure).

This strategy aligns with the dynamic nature of AI technology and its potential effects on people, businesses and society.  Adopting a uniform framework will promote ethical AI practices and help ensure consistent standards across industries, while encouraging innovation and advancement in the AI field.  The proposal’s ultimate success will, however, depend on the government’s readiness to implement it and to coordinate initiatives across sectors so as to address AI’s challenges and opportunities effectively.

One of the key recommendations made by the TRAI is to set up an Artificial Intelligence and Data Authority of India (more on this in the Annexure), which will oversee the development of responsible AI and regulate its applications in India.  Given the dynamic nature of AI and its imminent exponential growth, it will be paramount that the regulatory authority is agile, responsive and fast moving.  In this regard, while the recommendation to set up a dedicated body is well-intentioned, such a body will need to operate with a higher level of efficiency than some of India’s other regulators.  Further, India, with its young population, is adding millions to its workforce every year, and the Indian government’s AI policy may, therefore, need to incorporate certain socialist principles to ensure job security, which may lead to interesting differences between India’s policy approach to AI and that of developed nations.

The extent to which the TRAI Recommendations will be implemented, and the ultimate approach India will take, remains to be seen.  However, the balanced path that has been proposed augurs well for the future of the AI industry in India.

Authors: Yashaswee Sarkhel, Anshul Isaac and Mohit Bothra.