India's AI governance guidelines - how practical are these?
Regulatory challenges are too big to ignore if large private businesses are allowed to dominate the AI ecosystem. Image: iStock

India's AI governance guidelines promise ambitious vision, but uneven road ahead

Framework sets bold principles based on citizen-centricity and safety, but gaps in public consultation, regulation and infrastructure raise hard questions



The Centre released the "AI Governance Guidelines" just ahead of the India AI Impact Summit, signalling its intent to shape an AI ecosystem that is inclusive, safe, trusted, and innovation-friendly — one that lives up to the ambition of "AI for all."

The guidelines build on the momentum of the IndiaAI Mission, launched in March 2024 with an outlay of Rs 10,372 crore. That mission set out to break the concentration of AI capabilities — compute, data, and models — in the hands of a few, and to spread its benefits more widely across society. The broader goals: cement India's global standing in AI, reduce technological dependence, and ensure development stays on an ethical footing.

Getting governance right matters more than ever. AI is not a distant disruption — it is already reshaping economic activity, redefining work, and transforming workplaces in India and beyond. The question is not whether this transformation happens, but who benefits from it and on whose terms.

What the governance guidelines say

The guidelines are in line with the principles and recommendations of a special committee set up in July 2025 by the Ministry of Electronics and Information Technology (MeitY). This committee adopted the four sutras, or principles, laid down by an earlier FREE-AI Committee of the Reserve Bank of India (RBI).

The AI governance framework has four parts.


The first part lays down seven sutras to ensure cross-sectoral applicability and technology neutrality; the second identifies key issues and recommendations to resolve them; the third presents an action plan; and the fourth gives practical guidelines for industry actors and regulators to follow to achieve these objectives.

1) Part I lists seven sutras: trust is the foundation; people first; innovation over restraint; fairness and equity; accountability; understandable by design; and safety, resilience and sustainability.

2) Part II lists key issues: infrastructure, capacity building, policy and regulation, risk mitigation, accountability, and institutions. It recommends setting up new institutions: an AI Governance Group (AIGG) to coordinate overall policy development, a Technology & Policy Expert Committee (TPEC) to provide it expert inputs, and an IndiaAI Safety Institute for research, standard-setting, and collaboration with international bodies. It also recommends, among other measures, integrating AI with Digital Public Infrastructure (DPI) like Aadhaar and UPI, reviewing the legal framework, building the capacity of law enforcement, and strengthening transparency and accountability through audits and self-certification.


3) Part III presents an action plan with short-, medium-, and long-term goals. These include developing India-specific AI risk assessments, conducting regulatory gap analyses and amending laws to address the gaps, providing regulatory sandboxes, publishing common standards, and continuously reviewing and monitoring the governance framework.

4) Part IV has practical guidelines for the industry that are meant to support innovation and adoption while ensuring that risks are addressed in a proportionate and context-appropriate manner. These include protections for women, children, and vulnerable groups.

Comparison with global frameworks

A comparative analysis of the AI governance norms by a multinational law firm, Squire Patton Boggs, says the Indian AI governance model “aims to demonstrate that open, interoperable platforms can be used to deliver solutions that can be adopted widely”.

What sets it apart from the rest of the world’s approach is “a combination of normative leadership (e.g. guiding principles, safety norms) and practical infrastructure (DPI, AI Mission, GPUs, and AI Kosh datasets)”, it says.


About other AI governance models, it makes a few broad points.

The European Union has a single, horizontal law that classifies systems according to the risks they pose to end users, prohibits certain practices outright, and imposes strict documentation, compliance, and penalty regimes on high-risk AI.

The US has no federal AI legislation, except where AI affects national security, but several states do. Those laws focus on bias, discrimination, and civil rights in hiring, employment, and profiling. Meanwhile, US President Donald Trump issued an executive order seeking to block states from passing further AI-focused laws. (His executive order of December 12, 2025 is seen as a pushback against “onerous” state rules and a win for tech giants who have called for federal-level AI legislation.)


In the Asia-Pacific region, many countries aim to balance AI-driven innovation with safeguards against potential abuse. South Korea’s model aligns closest with the EU’s risk-based approach; China’s emphasises the risk of social unrest and other national security threats posed by generative AI; Japan allows free development but with a Cabinet-level office to monitor AI deployment and usage; the Asia-Pacific Economic Cooperation (APEC) prioritises AI infrastructure development in the region over restrictions on AI; and Australia has moved away from the EU-style approach, emphasising regulation of AI under existing laws with a new regulatory body, the National AI Safety Institute.

India 'in the middle'

Ashish Tulsankar, a Mumbai-based AI expert and entrepreneur, puts India “in the middle” of the global frameworks. It uses “broad principles, leans on current laws," he says, adding that "regulators add some new committees/centres” in a way that says India wants AI growth with guardrails.

As for other AI governance models, he says, the EU has “much stricter and very detailed” guidelines with a formal AI Act with strict rules and fines for high‑risk AI systems, while the US is “more about guidelines and standards than one big law” with frameworks and sector‑specific rules and a push to agencies and vendors to behave better.

What happens in practice

While the direction of India’s AI governance guidelines is sound, how these shape the ground reality will be known in a few years. Meanwhile, a few discordant notes need to be addressed.

One is the sutra of “people at the centre” or “people first” to “strengthen human agency and reflect societal values”.

However, none of the preparatory work, from the RBI’s seven sutras published last August to the AI guidelines of January 23, 2026 from the Office of the Principal Scientific Advisor to the Government of India, on which the AI governance guidelines issued on February 15, 2026 were based, went through public debate or consultation. These reports didn’t make the headlines either. The aims and contours of the AI summit and its declaration came to public attention only as they finally unfolded.


Two, the existing laws, regulatory mechanisms, and infrastructure that the guidelines cite as the backbone of AI governance have raised uncomfortable questions about its citizen-centric orientation.

For example, the Digital Personal Data Protection Act (DPDP) of 2023, which came into operation only in November 2025, has been challenged in the Supreme Court for diluting citizens’ data privacy and their rights to information while expanding the government’s surveillance power over them. A three-member bench, which heard several writs just as the summit was about to start, referred the matter to a larger bench a day after the AI governance guidelines were issued.

Concern among citizens

Similar challenges to the IT Rules of 2021, alleged to have been used to suppress citizens’ dissenting voices, have been pending before the apex court for several years.

Another amendment to the IT Rules of 2021, issued days before the summit began and called the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules 2026, covering AI-generated and deepfake content, has caused concern among citizens and social media giant Meta, which operates Facebook, Instagram, and WhatsApp. It reduces the takedown time for social media content the Centre deems objectionable from the earlier 24-36 hours to three hours.


On the sidelines of the summit, Meta’s Rob Sherman warned against its potential misuse and the operational difficulties it entailed for his social media platform in terms of verifying the validity of the Centre’s objections.

Utility of data centres

Three, the Prime Minister’s open invitation to global AI players to open their data centres for data storage in India sparked debates over the utility of such data centres for multiple reasons.

One is the potential weaponisation of access to the technology in case of conflicts, if India continues to rely on foreign firms without foundational models of its own. The other is the huge burden on India’s energy and water resources, and the attendant adverse environmental impact.


India depends on coal for 72 per cent of its electricity, and Indian cities are the most air-polluted in the world, accounting for 23 of the top 25 in the 2026 AQI ranking.

Regulatory challenges

Fourth, regulatory challenges are too big to ignore if large private businesses are allowed to dominate the AI ecosystem. The IndiGo fiasco in December 2025 is a case in point. The airline, commanding 65 per cent air passenger share, chose to ground over 5,000 flights when forced to abide by the safety norms (Flight Duty Time Limitation or FDTL) first notified in January 2024.

The early trend shows that some of the largest corporations in India are investing big in AI, raising fears of IndiGo-like monopolies.

Fifth, can India really shape AI governance at home or abroad without becoming a big AI player in its own right? The Economist, in a recent article, said that despite claiming to be emerging as an AI superpower along with the US and China, India was “a bystander in the two-horse race to develop frontier models”. It pointed in particular to India’s weak chip manufacturing (“its success in the coming years would at the most mean India being able to produce some lower-grade chips, while continuing to import whizzier ones”) and limited computational capacity to drive home the point.

'No single strict AI law'

When asked, Tulsankar told The Federal, “I’m mildly hopeful overall, but I expect uneven results.”

He elaborated that “on the plus side, India has shown it can roll out large digital systems at scale, these guidelines connect to real laws and regulators, and serious companies will likely treat them as a real design spec, not just a brochure” and “on the minus side, there is no single strict AI law yet, a lot is voluntary, and many smaller organisations simply don’t have the people or money to do proper AI governance until something forces them to”.


In short, Tulsankar sees the direction as sound and the structures as sensible, but cautions that the “real‑world impact will likely be strong in some pockets and weak in others, at least for the next few years”.