
Last year, numerous harmful deepfakes surfaced online, damaging the reputations of the individuals depicted. Even celebrities were not spared, with several fake videos of them going viral. In response, and following an order from the Delhi High Court, the Centre set up a committee to examine issues related to deepfakes and AI. Now Google, Meta and X have shared their positions on deepfakes and AI with the Centre's panel. In this article, we discuss the full story and what Google, Meta and X told the committee.

The Centre Panel

In November 2024, the Ministry of Electronics and Information Technology (MeitY) set up a nine-member committee to examine issues related to deepfakes, following an order from the Delhi High Court. The committee held a consultation meeting with tech giants and policy and legal stakeholders on 21 January.

The rapid spread of deepfakes can be genuinely dangerous. At the January stakeholder meeting, Google, Meta and X told the Indian government that they have implemented policies to address manipulated media, a growing concern in the country.

Google, Meta and X Reply to Centre Panel on Tackling Deepfakes

As reported by the Indian Express, two representatives from Google told the committee that the company has had a deepfake policy since November 2023 and uses artificial intelligence (AI) to take down manipulated content intended to cause harm. Under this policy, creators are asked to disclose synthetic content, which is then labelled. Google also has a process for "users to claim they are being used to create deepfake so that it can be taken down if their persona is being used."

Meta, for its part, launched its AI labelling policy in April 2024, which allows users to label and disclose AI-generated content when uploading it. The company has begun labelling AI, deepfake and synthetic content. However, many of Meta's policies are not specific to deepfakes or AI; they are quite general. The Meta representative also told the committee that the company is "working on protecting celebrity personas".

As for X, its representative said the platform has a "synthetic and manipulated media policy" under which deceptive content is taken down. However, the representative also said that for posts to be labelled, they need to be "extremely deceptive and harmful", as "not all AI content is deceptive in nature" and "it is important to draw that distinction going forward".

Google, Meta and X Replies to Centre Panel – Summary

Currently, Google is the only company with a system in place to handle situations where users report the misuse of their personas in manipulated media. Meanwhile, Meta is "working on" protecting "celebrity personas". X, on the other hand, emphasised that "not all AI content is deceptive in nature" and urged that "it is important to draw that distinction going forward."

The stakeholders advocated for the implementation of regulatory frameworks that mandate the disclosure of AI-generated content, establish standardized labeling protocols, and provide effective grievance resolution mechanisms. However, it is crucial to prioritize the regulation of malicious actors exploiting deepfake technology, rather than imposing undue restrictions on its creative applications. 

The MeitY-constituted committee is expected to complete its consultations with stakeholders, including victims of deepfakes, within the next three months.

That was it for this article. Keep an eye on Jagran English for more such updates!
