The Imperative for Dynamic Governance of AI in India

This blog explores the need for dynamic and sector-specific regulatory frameworks for AI, calling for greater education of both the public and policymakers, and for collaborative efforts to harness AI's advantages while mitigating its negative impacts.

Artificial Intelligence (AI) is a new technological paradigm poised to reshape almost every domain of our lives, whether health, education, agriculture, or transport. Such has been its integration into our daily lives that, in our thoughts, actions and experiences today, there is little distinction between what is ‘technological’ and what is otherwise ‘natural’. Like any other revolution, the AI revolution promises economic growth and positive outcomes for society, alongside its own array of challenges and risks, calling for intervention by the concerned authorities to ensure that its benefits far outweigh its negative impacts. However, the global interconnectedness of technological development, the technology’s versatile and intricate nature, and the lack of knowledge about its implications make its development and deployment difficult to govern.

India, like most countries, finds itself at a crossroads with AI and is manoeuvring between its promises and perils. While there is some discussion about the possible negative impacts of AI, such as job losses, misinformation and data bias, our policymakers, industry leaders and government representatives alike espouse a positive view of the technology. Much of the prevailing narrative in India is about harnessing the technology’s economic potential. Meanwhile, the technology continues to evolve without any governance framework of checks and balances, and uncertainty looms over its future implications across social, environmental and economic dimensions. As witnessed in the recent election season, the absence of institutional capacity to withstand the risks posed by AI is another significant challenge.

Nascent Technology, Perception Building and Policy-making

Initiatives by individual countries and global dialogues have established that the technology, despite its many promises, cannot be left unregulated. However, there are several challenges in achieving this. Here we take up two challenges that we consider the most important in the governance of AI: the first relates to the nature of the technology itself, while the second concerns how the technology is perceived.

First, ongoing debates about the stage of AI development are delaying the process of establishing adequate oversight and regulation. There are assertions that AI is in its infancy, and a perception that putting guardrails in place at such an early stage would hamper the meaningful evolution of the technology. At the same time, negative uses of the technology, such as deepfakes or audio manipulation, sometimes provoke knee-jerk reactions from government authorities. For instance, the Ministry of Electronics and Information Technology (MeitY) recently issued an advisory asking big tech companies to seek approval before deploying any generative AI tools or models. After criticism from industry, the advisory was withdrawn, and only the labelling of AI-generated content is now mandated. Such incidents underscore the need for a more proactive response from the government and an overarching framework as a reference point for regulating AI.

Secondly, lawmakers worldwide are struggling to understand this complex technology, and India is no exception. According to a report by the Institute for Governance, Policies and Politics, New Delhi, ‘What Indian Parliamentarians Think of AI?’, released in March 2024, only one in three parliamentarians is aware of the role of data in AI models. This suggests that only a few parliamentarians are in a position to deliberate on crucial issues relating to AI, which is particularly surprising given recent legislative efforts such as the Digital Personal Data Protection Act, 2023, passed after numerous iterations. As many as 89 percent of parliamentarians lack a fundamental understanding of how AI systems function, and this lack of foundational understanding also contributes to negative perceptions of AI’s societal impact: 59 percent of them believe AI will have a negative impact on society. Most of their fear and anxiety about the technology centres on potential job losses and the emergence of super-intelligent systems.

The Middle Ground

Such contradictory attitudes, which on the one hand focus on the technology’s economic potential and on the other treat it as a threat, pose a dilemma: what approach should we adopt in regulating the technology? Do we regulate it at all, or wait for it to develop fully before putting guardrails in place?

Before deliberating on solutions, it is important to understand that AI is a probabilistic rather than a deterministic technology. We must therefore accept that it will give probable results, and that achieving human-level complexity remains a distant goal.

Here are some probable solutions for the situation at hand. First, policy formulation should prioritise dynamism over perfection, enabling adaptation to emerging circumstances. Waiting for the ‘right time’ risks falling behind, and we may once again find ourselves trailing, as in the case of social media governance. Economic growth can be reclaimed, but harms to society cannot be undone.

Second, create sector-specific solutions. Each sector has its own unique set of challenges and solutions, and a one-size-fits-all approach would not be the right way to govern AI. What can be done at the moment is to ensure that AI is integrated into sectors such as healthcare and education in a manner that does not simply replace current practices, but identifies their limitations and uses the technology to overcome them. Technology should be assistive and complementary. Establishing an overarching governance framework for AI is crucial; however, this framework should leave scope for each sector to define its own rules and policies, since what is required for the financial sector differs from what is required for healthcare.

Lastly, there is an urgent need to educate the public, and especially policymakers, about the disruptive power of new technologies while emphasising their potential to serve society. A negative perception of technology and a lack of adequate understanding often prompt a reactionary approach from the government. Users should also be made aware of how technology can be used positively, rather than to deepen existing societal divides. IGPP is doing its bit to educate our future representatives by engaging them in a virtual session, ‘AI & Your Electoral Fortune’, making them aware of how they can use AI effectively in their campaigns while warding off its threats.

However, more needs to be done, and all stakeholders should be brought to the table to ensure that a balanced regulatory framework is put in place and that India can continue to take the lead, as it has through initiatives such as the IndiaAI Mission.

Manish Tiwari is the Director of the Institute for Governance, Policies & Politics (IGPP), a think tank that conducts in-depth research and analysis on issues of public policy.
