In a powerful assertion of technological sovereignty, India unveiled three “Made in India” artificial intelligence models at the India AI Impact Summit 2026, demonstrating that homegrown systems optimized for Indic languages and low-resource environments can match or surpass global tech giants on performance benchmarks.
The launches of Sarvam AI’s 105-billion parameter model, BharatGen’s Param2 17B, and Gnani.ai’s Vachana text-to-speech system collectively signal India’s emergence as a credible AI development hub independent of Silicon Valley’s dominant platforms.
Sarvam 105B: India’s Frontier Model
Sarvam AI, a Bengaluru-based startup, emerged as the summit’s technological showstopper with its 105-billion parameter large language model employing a Mixture-of-Experts (MoE) architecture. The model reportedly outperforms Google’s Gemini Flash and DeepSeek R1 on multiple benchmarks while operating with dramatically lower inference costs.
The MoE architecture represents sophisticated engineering: rather than activating all 105 billion parameters for every task, the model intelligently routes queries to specialized “expert” sub-networks, typically activating only 10-15% of total parameters. This approach drastically reduces computational requirements during inference—the phase when users actually interact with the model.
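The routing idea described above can be sketched in a few lines of code. The following is an illustrative top-k gating mechanism, not Sarvam's actual implementation; the expert count (16), the gate scores, and the top-k value (2) are arbitrary assumptions chosen only to show how most experts stay idle for any given token.

```python
import math
import random

def softmax(scores):
    """Normalize raw gate scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_to_experts(gate_scores, top_k=2):
    """Select the top_k experts for one token; only those sub-networks
    run, the remaining experts are skipped entirely."""
    probs = softmax(gate_scores)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    # Renormalize mixing weights over the chosen experts only.
    weight_sum = sum(probs[i] for i in chosen)
    return [(i, probs[i] / weight_sum) for i in chosen]

# Hypothetical gate scores for a single token across 16 experts.
random.seed(0)
scores = [random.gauss(0, 1) for _ in range(16)]
active = route_to_experts(scores, top_k=2)
print(active)  # (expert index, mixing weight) for the 2 active experts
```

With 2 of 16 experts active per token, only 12.5% of expert parameters do work at inference time, which is the source of the cost savings the paragraph describes.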
“Traditional dense models are like using a sledgehammer for every task,” explained Vivek Raghavan, co-founder of Sarvam AI. “Our MoE architecture is like having specialized tools—you use the right one for each job. This means we can match GPT-4 class performance at one-tenth the inference cost.”
Sarvam 105B features a 128,000-token context window—the amount of text the model can process simultaneously—positioning it among the longest context windows globally. This capability is particularly valuable for processing lengthy documents in legal, medical, and administrative contexts common in Indian government and enterprise settings.
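A back-of-the-envelope check makes the 128,000-token figure concrete. The 4-characters-per-token ratio below is a rough, English-centric rule of thumb (real tokenizers vary by script, and Indic scripts often tokenize less efficiently), and the page size is a hypothetical figure for illustration only.

```python
def estimate_tokens(text, chars_per_token=4.0):
    """Very rough token estimate; the ratio is an assumed rule of thumb
    and varies with the tokenizer and the script."""
    return int(len(text) / chars_per_token)

def fits_in_context(text, context_window=128_000, reserve_for_output=4_000):
    """Check whether a document fits while leaving room for the reply."""
    return estimate_tokens(text) + reserve_for_output <= context_window

# A hypothetical 250-page legal filing at ~1,800 characters per page.
doc = "x" * (250 * 1800)
print(estimate_tokens(doc), fits_in_context(doc))
```

Under these assumptions a 250-page filing (~112,500 tokens) fits in a single pass, which is why long-document legal and administrative workloads are the natural fit the article identifies.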
Perhaps most significantly, Sarvam 105B is optimized for 10 Indian languages including Hindi, Tamil, Telugu, Bengali, and Marathi. Unlike global models that treat non-English languages as afterthoughts, Sarvam embedded Indic language training into its core architecture from the beginning.
“English speakers are less than 15% of humanity,” Raghavan noted. “AI optimized only for English is AI optimized for a minority. We built for the majority.”
BharatGen Param2 17B
BharatGen, a consortium led by IIT Bombay with participation from multiple Indian Institutes of Technology, took a different approach with its Param2 17B model. While smaller than Sarvam 105B in parameter count, Param2 focuses on linguistic accuracy across all 22 official Indian languages recognized in the Constitution.
The model addresses what researchers call the “language gap”—the systematic bias in AI systems toward English and a handful of Western European languages. Global AI benchmarks are predominantly English-centric, meaning models perform dramatically worse in languages like Konkani, Manipuri, or Santali.
“When Google Translate fails for Santali, that’s not a technical limitation—it’s a reflection of priorities,” argued Professor Pushpak Bhattacharyya, who leads the BharatGen initiative. “We prioritized what matters to India: ensuring every citizen can interact with AI in their mother tongue.”
BharatGen made a strategic decision to release Param2 17B as open source through Hugging Face, the popular AI model repository. This choice reflects a philosophical commitment to accessibility and transparency—characteristics the consortium argues should define AI development in the Global South.
“Open source is not charity. It’s strategic sovereignty,” Bhattacharyya explained. “When models are open, researchers everywhere can audit, improve, and adapt them. Closed models create dependency; open models create ecosystems.”
The open-source approach has already yielded dividends. Within 48 hours of release, developers had fine-tuned Param2 derivatives for specialized applications including legal document analysis in Hindi, agricultural advisory in Punjabi, and healthcare diagnostics in Bengali.
Gnani.ai Vachana
Gnani.ai's contribution, the Vachana text-to-speech system, addresses a different dimension of language access. Vachana can clone human voices across 12 Indian languages using only 3-5 seconds of audio—a capability with transformative implications for accessibility.
For India’s 40 million persons with visual impairments, high-quality text-to-speech in regional languages is essential for accessing digital content. Existing TTS systems produce robotic, unnatural voices, a flaw that is particularly jarring in prosodically rich languages like Tamil or Bengali. Vachana’s voice cloning creates natural, emotionally expressive speech that dramatically improves comprehension and user experience.
“A visually impaired Tamil speaker in Madurai doesn’t want to hear content read by a Hindi-accented robot,” said Ganesh Gopalan, CEO of Gnani.ai. “They want natural Tamil speech. That’s what Vachana delivers.”
The system is also optimized for low-bandwidth environments, a critical consideration in India, where millions access the internet through 2G and 3G connections. Vachana compresses voice synthesis to operate on connections as slow as 32 kbps, ensuring accessibility even in rural areas with poor connectivity.
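Simple arithmetic shows what the 32 kbps figure buys. The comparison below against uncompressed 16-bit, 16 kHz PCM is generic bitrate math, not a description of Vachana's actual codec or sample rate, which the article does not specify.

```python
def stream_bytes(bitrate_kbps, seconds):
    """Bytes needed to stream audio at a given bitrate (1 kbps = 1000 bit/s)."""
    return int(bitrate_kbps * 1000 / 8 * seconds)

# One minute of speech: 32 kbps stream vs. uncompressed 16-bit / 16 kHz PCM.
compressed = stream_bytes(32, 60)     # 240,000 bytes (~234 KiB)
pcm = 16_000 * 2 * 60                  # 16,000 samples/s * 2 bytes * 60 s
print(compressed, pcm, round(pcm / compressed, 1))
```

Under these assumptions a minute of speech costs about 234 KiB instead of ~1.8 MiB, an 8x reduction that makes streaming viable on a congested 2G link.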
Sarvam Kaze: Hardware Ambitions
Beyond software, Sarvam AI unveiled the Sarvam Kaze smart glasses—AI-powered wearables capable of real-time visual and auditory processing. The glasses demonstrate India’s ambitions in hardware-software integration, an area where Chinese and American companies currently dominate.
Kaze glasses employ on-device AI processing for privacy-sensitive applications like real-time language translation, scene description for visually impaired users, and augmented reality navigation. The glasses integrate with Sarvam’s language models to provide contextual information in the user’s preferred Indian language.
“Hardware is where Apple dominates, where Meta is investing billions,” Raghavan noted. “We cannot cede that territory. Kaze proves Indian companies can compete in integrated AI systems, not just software.”
Challenging the Monopoly
The strategic significance of India’s homegrown models extends beyond national pride. It represents a fundamental challenge to the emerging AI oligopoly where a handful of American companies—OpenAI, Google, Anthropic, Meta—control access to frontier capabilities.
This concentration creates multiple vulnerabilities. Commercial terms can change arbitrarily: OpenAI recently limited free API access, disrupting hundreds of developers. Political pressures can restrict access: the U.S. government has discussed limiting China’s access to American AI systems. Technical priorities can ignore local needs: GPT-4’s Hindi performance remains demonstrably inferior to its English capability.
“Every nation dependent on American AI models is one policy change away from being cut off,” warned Dr. Pawan Duggal, a prominent Indian cyberlaw expert. “Sovereign AI capability is not optional—it’s existential.”
The Open Versus Closed Debate
India’s AI ecosystem is notably divided on the question of open versus closed models. Sarvam AI has not released its 105B model openly, citing commercial considerations and competitive dynamics. BharatGen’s fully open approach contrasts sharply with this stance.
“We respect Sarvam’s decision, but we believe openness serves India’s interests better,” Professor Bhattacharyya argued. “A thousand developers improving Param2 will yield more value than proprietary models accessible only to those who can pay.”
Supporters of Sarvam’s approach counter that sustainable AI businesses require revenue, and open-source models struggle to generate returns sufficient for continued development. “Good intentions don’t pay for GPUs,” Raghavan noted pragmatically.
Benchmarks and Reality Checks
While Indian models have demonstrated impressive capabilities, independent benchmarking suggests caution against overstated claims. Sarvam 105B’s reported performance exceeding Gemini Flash remains contested, with some researchers noting benchmark-specific optimization that may not generalize.
“Beating specific benchmarks is different from overall capability,” cautioned Dr. Anita Ramachandran of IIT Madras. “We should celebrate progress while maintaining scientific rigor about where Indian models truly stand.”
Nevertheless, the trajectory is undeniable. Eighteen months ago, no Indian AI model exceeded 10 billion parameters. Today, India fields a 105-billion parameter system. The gap to frontier models like GPT-4 or Claude 3 is narrowing rapidly.
The Infrastructure Foundation
None of these models would exist without India’s GPU infrastructure expansion (detailed in Report 2). Sarvam 105B required approximately 15,000 GPU-hours of training—economically prohibitive at commercial cloud rates but feasible with subsidized government compute.
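The economics behind the 15,000 GPU-hour figure can be sketched with straightforward arithmetic. The hourly rates below are illustrative assumptions only; neither commercial cloud pricing nor the IndiaAI Mission's subsidy level is stated in the article.

```python
def training_cost(gpu_hours, rate_per_hour):
    """Total compute cost for a training run at a flat hourly GPU rate."""
    return gpu_hours * rate_per_hour

GPU_HOURS = 15_000  # training figure reported for Sarvam 105B

# Hypothetical USD/GPU-hour rates: commercial cloud vs. subsidized compute.
commercial = training_cost(GPU_HOURS, 3.00)
subsidized = training_cost(GPU_HOURS, 0.50)
print(commercial, subsidized, commercial - subsidized)
```

Even with these made-up rates, the gap between a commercial and a subsidized run is the difference between a venture-scale line item and a grant-scale one, which is the point Raghavan makes about infrastructure enabling possibility.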
“The IndiaAI Mission didn’t just provide GPUs—it provided possibility,” Raghavan acknowledged. “Without that infrastructure, Sarvam would still be training 1-billion parameter models.”
A Third Pole Emerges
For decades, AI development has been bipolar: American innovation or Chinese scale. India’s homegrown models establish a third pole characterized by linguistic diversity, low-resource optimization, and developmental focus.
“We are not trying to be America or China,” Professor Bhattacharyya concluded. “We are being India—and that means AI that serves 1.4 billion people across 22 languages, built with our engineers, trained on our infrastructure, and reflecting our priorities.”
As these models mature and proliferate, the global AI landscape is being fundamentally reshaped. The question is no longer whether alternatives to Silicon Valley’s AI hegemony can exist—India has demonstrated they can. The question now is whether they can scale to truly compete.
– Kalyan S Maramaganti