- By Alex David
- Wed, 08 Oct 2025 01:11 PM (IST)
- Source: JND
In a significant drive for responsible artificial intelligence, the IndiaAI Mission has unveiled five new AI projects under its “Safe & Trusted AI” programme. The initiatives address pressing issues such as deepfake identification, AI bias reduction and security testing. The mission, implemented under the aegis of the Ministry of Electronics and Information Technology (MeitY), received more than 400 applications from research institutions, startups and academia across different regions of India.
IndiaAI Mission’s Vision for Safe AI
The IndiaAI Mission aims to build India’s AI ecosystem by embedding ethics, safety and inclusivity in artificial intelligence. In December 2024, the mission released a second call for Expressions of Interest (EoI) under the Safe & Trusted AI pillar. Following a rigorous evaluation process, five pioneering projects have now been chosen for funding and mentorship.
IndiaAI Selects Five Projects: Full List and Focus Areas
The chosen projects cover urgent topics in AI, from deepfake detection to gender bias assessment and testing for AI model security.
Saakshya: Multi-Agent Deepfake Detection Framework
Saakshya, led by IIT Jodhpur and IIT Madras, aims to develop advanced retrieval-augmented generation (RAG) techniques and train multi-agent systems for detecting manipulated media. The initiative seeks to build resources for identifying deepfakes across different formats, which is important for sound governance and accountability around AI-generated media.
AI Vishleshak: Audio-Visual and Signature Forgery Detection
AI Vishleshak, created by IIT Mandi in association with the Director of Forensic Services, Himachal Pradesh, will be used to identify forged audio, video and handwritten documents. The researchers say the tool is versatile and can be applied in a wide variety of contexts, such as law enforcement and digital verification.
Real-Time Voice Deepfake Detection
IIT Kharagpur is heading this project, which aims to create tools for detecting voice spoofing and impersonation in real time. It could help fend off voice phishing, in which con artists dupe people into handing over personal information, and impersonation scams, as well as bolster the security of voice-based AI systems.
Evaluating Gender Bias in Agricultural AI Systems
Digital Futures Lab and Karya have come together for this project, which seeks to unearth and address gender biases in agriculture-themed AI models. The initiative aims to ensure that AI tools used by people along the farming value chain are fair, inclusive and unbiased with regard to gender in both their data inputs and responses.
Anvil: Penetration Testing and Evaluation Tool for Generative AI
A collaboration between Globals ITES and IIT Dharwad, Anvil will develop cutting-edge penetration testing tools for large language models (LLMs) and generative AI systems. It will also prioritise assessing the safety and robustness of homegrown AI models, ensuring they meet high quality standards.
Building a Safe and Transparent AI Ecosystem
According to the IndiaAI Mission, these projects form the next wave of building a trusted, secure and inclusive AI ecosystem in India. The mission aims to encourage tools that detect manipulation, mitigate bias and improve trust in AI systems.
Conclusion
The IndiaAI Mission’s support for these five projects will strengthen India’s leadership in responsible and ethical AI innovation. Whether guarding users against deepfakes, combating gender bias or enhancing model security, these initiatives will be key to the development and deployment of safe and responsible AI in the country.