Amid the celebratory atmosphere surrounding India’s AI ambitions at the India AI Impact Summit 2026, a sobering counter-narrative emerged from legal scholars, civil liberties advocates, and technology policy experts: India’s legislative and regulatory frameworks remain dangerously unprepared for artificial intelligence’s societal implications, creating governance gaps that threaten both innovation and rights protection.
The fundamental challenge, experts argue, is temporal mismatch. Technology evolves exponentially, with AI capabilities by some estimates doubling every six to twelve months, while legal systems operate linearly, with legislation requiring years to draft, debate, and implement. The result is that 2000-era laws govern 2026 technologies, producing regulatory frameworks systematically inadequate for the challenges they confront.
The IT Act 2000: A Quarter-Century Obsolete
India’s primary digital governance legislation, the Information Technology Act of 2000, predates social media, smartphones, cloud computing, and certainly generative AI. The Act addresses computer-related crimes, electronic signatures, and data protection—concerns relevant to the early internet era but insufficient for the AI age.
“The IT Act doesn’t even mention algorithms, let alone algorithmic accountability,” noted Dr. Pawan Duggal, India’s preeminent cyberlaw expert speaking at a summit session on AI governance challenges. “We are applying legal frameworks from the dial-up modem era to regulate technologies that can generate photorealistic deepfakes and manipulate elections.”
Specific inadequacies include:
Algorithmic Accountability: No provision establishes liability when AI systems cause harm through bias, errors, or malfunction. If an AI-powered loan denial system discriminates against protected classes, existing law provides no clear cause of action or remedy.
Synthetic Content: While 2026 rules mandate deepfake takedowns within three hours, the IT Act’s original framework lacks provisions specifically addressing AI-generated content, creating legal ambiguity about liability, authentication requirements, and permitted uses.
Data Extractivism: The Act’s data protection provisions (amended multiple times) focus on privacy but inadequately address AI’s voracious data appetite and questions of value distribution when personal data trains commercial AI models.
Autonomous Systems: As AI systems gain increasing autonomy—making decisions without human oversight—liability frameworks designed for human decision-makers become incoherent. Who bears responsibility when an algorithmic trading system causes market disruption? The programmer, the company deploying it, or the AI itself?
The Digital Personal Data Protection Act 2023
The Digital Personal Data Protection Act of 2023 represented significant progress, establishing consent frameworks, data localization requirements, and individual rights regarding personal information. However, the legislation was drafted before generative AI’s emergence and consequently fails to address AI-specific data challenges.
AI training on personal data often occurs without meaningful consent—users posting content online rarely anticipate it training commercial AI models. The Act’s consent provisions, designed for traditional data processing, don’t clearly apply to AI training scenarios.
Additionally, the Act exempts government agencies from several provisions, creating accountability gaps for government AI deployments in policing, welfare administration, and surveillance—precisely the contexts where algorithmic power poses the greatest risks to rights.
The Three-Hour Takedown Rule
India’s 2026 IT Rules require platforms to remove deepfakes and synthetic media within three hours of notification. This mandate, announced shortly before the summit, represents the government’s attempt to address the harms of synthetic content through administrative regulation rather than waiting for legislative action.
Supporters argue the rule provides necessary urgency given deepfakes’ demonstrated capacity to spread misinformation, manipulate elections, and destroy reputations. “Three hours from viral deepfake to removal can limit damage significantly,” argued Rajeev Chandrasekhar, former Minister of State for Electronics and IT.
However, critics contend the rule creates over-moderation incentives. Platforms facing tight deadlines and potential penalties will remove content conservatively, taking down legitimate material to avoid risk—a problem compounded by the fact that deepfake detection itself remains imperfect, with false positive rates exceeding 5% even for sophisticated systems.
“Three-hour takedown means platforms will implement automated removal with minimal review,” cautioned Apar Gupta of the Internet Freedom Foundation. “This threatens satire, political commentary, and artistic expression that employs synthetic media legitimately.”
The rule also lacks clarity on jurisdictional scope. Must platforms remove content globally when notified by Indian authorities, or only for Indian users? Can Indian citizens access synthetic content hosted abroad? How do conflicting jurisdictions’ requirements get resolved?
Facial Recognition and Surveillance
AI-powered facial recognition systems have proliferated across Indian cities for law enforcement, despite the absence of specific legislation governing their deployment, accuracy standards, or use limitations. The Delhi Police alone operate over 140,000 CCTV cameras feeding facial recognition algorithms.
No law requires accuracy testing before deployment, transparency about algorithm performance across demographic groups, or independent audit of systems’ compliance with constitutional rights. Research demonstrates that facial recognition systems perform worse on darker skin tones and on women—precisely the populations most vulnerable to state power in the Indian context.
“We have deployed facial recognition systems affecting millions without debate about whether such deployment is appropriate, let alone how it should be regulated,” observed Vidushi Marda, senior researcher at Article 19 focused on algorithmic accountability. “This represents governance failure of staggering proportions.”
Civil liberties advocates argue facial recognition systems create chilling effects on political assembly, enable discriminatory policing, and establish surveillance infrastructure potentially abused by authoritarian governments—concerns particularly salient given global democratic backsliding trends.
Criminal Justice AI
Several Indian states are piloting or deploying AI systems for criminal justice applications including predictive policing (forecasting where crimes will occur), recidivism prediction (assessing reoffending likelihood for bail and parole decisions), and case outcome forecasting (estimating trial outcomes to prioritize prosecutions).
These applications pose acute fairness concerns. Predictive policing systems trained on historical crime data encode existing policing biases—if police disproportionately patrol minority neighborhoods, crime data shows higher offending rates in those areas, causing algorithms to recommend continued over-policing, creating self-fulfilling prophecies.
Recidivism prediction systems used in the United States have demonstrated systematic racial bias, assigning higher risk scores to Black defendants than white defendants with identical criminal histories. Indian caste dynamics create analogous risks if algorithms encode historical discrimination into supposedly objective risk assessments.
“Algorithmic criminal justice is particularly dangerous because it legitimizes bias through mathematical authority,” warned lawyer Vrinda Bhandari, who has litigated algorithmic accountability cases. “Courts defer to ‘scientific’ risk assessments without understanding they are encoding societal prejudices.”
No Indian legislation specifically regulates criminal justice AI by requiring accuracy testing or bias audits, or establishes defendants’ rights to understand and challenge algorithmic assessments affecting their liberty.
Healthcare AI
AI medical devices and diagnostic systems require regulatory approval from the Central Drugs Standard Control Organisation (CDSCO). However, existing frameworks were designed for physical medical devices with fixed performance characteristics, not software systems that improve through machine learning and may degrade through distribution shift as patient populations change.
Regulatory ambiguity affects innovation timing. Overly stringent requirements delay beneficial technologies’ deployment. Insufficient oversight allows dangerous systems to reach patients before safety is established. Getting this balance right requires AI-specific regulatory expertise that CDSCO is still developing.
Additionally, liability frameworks remain unclear when AI misdiagnosis harms patients. Is the algorithm developer liable? The hospital deploying it? The physician who accepted the AI’s recommendation? Existing medical malpractice law presumes human decision-makers; allocating responsibility for algorithmic errors requires doctrinal innovation Indian courts have not yet undertaken.
Employment Displacement
While summit discussions emphasized AI augmenting rather than replacing human workers, substantial displacement risk remains real for routine cognitive tasks—call centers, data entry, basic accounting, legal document review—employing millions of Indians.
India lacks social safety nets to cushion displacement. Unemployment insurance covers only 2% of the workforce. Job retraining programs remain limited in scale and effectiveness. Universal Basic Income proposals remain experimental pilots rather than national policy.
“We are deploying labor-replacing AI without social infrastructure to support displaced workers,” argued Jean Drèze, development economist at Ranchi University. “This is recipe for social unrest and humanitarian crisis.”
No legislation requires employers to assess AI’s employment impact before deployment, fund worker retraining, or provide transition support. Labor laws focus on factory-era concerns—working hours, safety conditions—rather than technological displacement.
The Cybersecurity Law Gap
Successive Indian governments have promised comprehensive cybersecurity legislation creating incident reporting requirements, establishing security standards, and defining liability for breaches. Such legislation remains perpetually delayed, currently not expected before 2028.
This gap proves particularly problematic as AI systems become attack vectors and targets. Adversarial attacks manipulating AI behavior, data poisoning corrupting training sets, and model theft extracting proprietary algorithms represent emerging threat vectors existing law inadequately addresses.
Financial services AI, healthcare diagnostic systems, and autonomous vehicles require cybersecurity standards appropriate to the risks they pose. Without a legislative framework establishing such standards, security remains voluntary rather than mandated, creating predictable underinvestment in protective measures.
Constitutional Doctrine
Deeper than specific legislative gaps lies a question of constitutional doctrine: how do rights established for human decision-makers apply when algorithms govern?
The Constitution guarantees equal protection, due process, and various fundamental rights. These guarantees presume transparent government action subject to judicial review. Algorithmic governance challenges these assumptions fundamentally.
Machine learning systems often function as “black boxes” where even developers cannot fully explain individual decisions. How can courts review algorithmic decisions for constitutional compliance if decisions are unexplainable? Does due process require algorithmic transparency, and if so, how is that operationalized when transparency could compromise proprietary systems or enable adversarial manipulation?
Indian courts have begun grappling with these questions but lack a coherent doctrinal framework. Different High Courts have issued contradictory rulings on algorithmic transparency requirements, creating legal uncertainty.
Regulation Risks Stifling Innovation
Technology industry representatives argue that premature, heavy-handed regulation at this stage of AI’s development risks strangling innovation before the technology’s benefits are realized. Better, they contend, to deploy quickly, learn from experience, and regulate based on observed harms rather than speculative concerns.
“Every new technology faces calls for preemptive regulation,” argued an industry lobby representative speaking at summit side events. “Overzealous regulation drove India’s tech industry abroad in the past. We cannot repeat that mistake with AI.”
This perspective emphasizes “regulatory sandboxes” allowing controlled experimentation rather than comprehensive legislation. Such approaches enable learning while limiting risks through narrow deployment and close monitoring.
However, critics counter that “move fast and break things” proves acceptable for consumer applications but dangerous when AI governs criminal justice, healthcare, and social welfare. “We don’t experiment on vulnerable populations then regulate after harm occurs,” Marda argued. “That is immoral and unconstitutional.”
International Models
India can learn from international AI governance efforts, both successes and failures. The European Union’s AI Act creates a risk-based regulatory framework with strict requirements for high-risk applications—precisely what experts argue India needs.
However, the EU legislation took four years to develop and runs to 458 pages, a timeline and complexity India’s legislative process might struggle to replicate. Additionally, the EU framework reflects European priorities and values not necessarily aligned with India’s developmental context.
China’s AI governance emphasizes state control and social stability, with algorithmic recommendation systems required to “adhere to mainstream values” and facial recognition tightly regulated. This model proves unsuitable for democratic society valuing pluralism and individual rights.
The United States lacks comprehensive federal AI legislation, instead employing sector-specific regulations and voluntary industry commitments—an approach criticized as insufficiently protective but praised for preserving innovation.
India’s optimal path likely involves hybridization: learning from others’ successes while adapting to Indian constitutional values, developmental priorities, and governance capacity.
The Path Forward
Legal scholars and policy experts at the summit converged on several reform priorities:
- Comprehensive AI Legislation: Establish framework defining algorithmic accountability, requiring transparency for high-risk applications, mandating impact assessments, and creating individual rights regarding algorithmic decisions.
- Dedicated Regulatory Capacity: Create AI regulatory authority with technical expertise to audit systems, enforce standards, and issue guidance—capabilities existing agencies lack.
- Algorithmic Impact Assessments: Require developers and deployers of high-risk AI systems to document training data, test performance across demographic groups, identify failure modes, and implement mitigation measures before deployment.
- Judicial Capacity Building: Train judges in algorithmic decision-making implications, enabling informed constitutional review of AI governance systems.
- Public Participation: Ensure affected communities participate in AI governance decisions rather than technocratic elite imposing systems without consultation.
The Urgency Question
Whether these governance gaps constitute a crisis requiring immediate intervention or manageable growing pains depends partly on the pace of AI deployment and partly on risk tolerance.
If AI deployment remains gradual, a learning-by-doing approach that allows regulation to evolve with the technology is defensible. If deployment accelerates, however, as the summit’s investment commitments suggest it will, the governance gaps widen dangerously.
“We are at inflection point,” Duggal concluded. “The decisions India makes in next 18-24 months about AI governance will shape outcomes for decades. Get it wrong, and we either stifle innovation or enable algorithmic authoritarianism. Get it right, and we demonstrate democratic societies can harness AI while protecting rights. The world is watching.”
As the India AI Impact Summit 2026 concluded, celebrating technological achievements and investment commitments, this governance warning remained a sobering counterpoint. India’s AI ambitions require not merely computational infrastructure and capital but institutional infrastructure and a legal framework ensuring technology serves democratic values and constitutional commitments. Whether such frameworks emerge before the governance gaps produce crisis remains the critical unanswered question.
– B P Padala