The high-stakes battle to tame deepfakes

Fact-checkers at Factly, a civic-tech and fact-checking initiative based in Hyderabad, ran the three-minute-long video through multiple artificial intelligence (AI) detection tools. They found that the voices had been fabricated. A website link in it led to a fraudulent clone of a government portal.

By the time the clip reached Factly’s newsroom, it had already flooded WhatsApp groups across the country.

Rakesh Dubbudu, who founded Factly, says he instinctively knows which videos are fake. Cues such as body language, added sound and the brazenness of what the video is trying to portray give it away. “If the duration of a video clip is less than 10 seconds, which we see happening more often now, existing detectors can’t conclusively prove that it is synthetically generated,” says Dubbudu.

Across India, the scale of the problem has grown manifold in recent months. Deepfakes are now showing up in newsrooms, election campaigns and courts. India now ranks among the top countries targeted by synthetic celebrity videos.

In October, actors Aishwarya Rai Bachchan and Abhishek Bachchan filed a lawsuit against YouTube and Google over manipulated videos showing them in a sexually suggestive manner. The couple sought sweeping injunctions to curb the circulation of such videos, in what was one of the first high-profile deepfake cases to reach the Bombay High Court. The case brought much-needed national attention to a pressing issue.

“These kinds of high-profile deepfake incidents, involving celebrities or major public figures, often become the trigger points for policy action,” said Amlan Mohanty, a lawyer who tracks technology regulation. “They galvanize public opinion and create political momentum. Regulators recognize that these moments can be leveraged to push through amendments or new frameworks that might otherwise move much more slowly,” he said.

Late in October, as deepfakes flooded Bihar’s election season, the ministry of electronics and information technology (Meity) released a draft amendment to the Information Technology (IT) Rules that would compel platforms to “detect, label, and verify” all AI-generated content.

The draft rules

The two-page draft proposal by Meity introduced the term “synthetically generated information” and tasked social media platforms with detecting, labelling and verifying every piece of AI-generated or AI-modified content that appears on their platforms.

According to the draft, each such post must carry a visible watermark or label covering at least 10% of the screen area or, for audio, 10% of the clip’s duration, and platforms must ensure users self-declare if their upload is AI-generated.
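
To see what that requirement implies in practice, here is a minimal sketch, assuming a single image and the Pillow imaging library, of a label occupying 10% of the frame. The text “AI-GENERATED” and the band placement are assumptions; the draft specifies only the coverage, not how to achieve it.

```python
# A sketch, not a compliant implementation: stamp a visible band across
# the bottom of an image so the label occupies at least 10% of the frame.
# The label text and placement are assumptions; the Meity draft specifies
# only the 10% coverage, not the mechanism.
from PIL import Image, ImageDraw

def add_visible_label(path_in: str, path_out: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    band_h = max(1, h // 10)  # full-width band, 10% of height = 10% of area
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - band_h), (w, h)], fill=(0, 0, 0))               # opaque band
    draw.text((10, h - band_h + band_h // 4), text, fill=(255, 255, 255))   # default font
    img.save(path_out)
```

Even this trivial version hints at the scale problem: a large platform would need something equivalent for every image, every video frame and every audio clip uploaded.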

“But the way it (the draft) defines synthetically generated information is extremely broad,” said Dubbudu. “If I simply change the colour of something in a photo, that technically becomes synthetic media. The definition doesn’t distinguish between a malicious deepfake and a harmless edit. That means almost everything uploaded online could fall under it.”

Dubbudu added that this ambiguity can be overwhelming for both users and platforms. “Most social media content today is altered in some way through filters, enhancements or templates. If all of that counts as synthetic, the entire internet would need a warning label,” he said.

The draft also states that platforms failing to label synthetic content could lose their safe-harbour protection under the IT Act.

“Everyone, including the government, recognizes that some of what’s being suggested—like watermarks, unique identifiers, automated verification—is technically very hard to build, deploy or scale,” said Mohanty. “Once built, it will have to be customized for India, and even then it can be easily bypassed. Take a screenshot, crop a frame, and the watermark disappears.”

According to Google, YouTube removes technically manipulated or misleading videos that risk causing harm, using a mix of automated detection and human moderation. The company says it also requires creators to disclose when their videos are synthetically altered and applies labels to such content; where creators fail to disclose, YouTube may add the labels itself.

Google has also expanded its digital watermarking tool, called SynthID, which invisibly tags AI-generated images, audio and video across its products.
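
SynthID’s actual technique is proprietary, so the following is only a toy sketch of the general idea behind an invisible watermark: hiding a known bit pattern in the least-significant bits of pixel values. It also illustrates Mohanty’s bypass point, since a mark this naive does not survive a screenshot, crop or re-encode.

```python
# Toy invisible watermark: embed a known bit pattern in pixel LSBs and
# check for it later. This is NOT how SynthID works; it only illustrates
# the concept, and even light re-encoding destroys a mark this fragile.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed(pixels: np.ndarray) -> np.ndarray:
    """Overwrite the LSBs of the first few pixel values with the mark."""
    flat = pixels.flatten()
    flat[: len(MARK)] = (flat[: len(MARK)] & 0xFE) | MARK
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    """True if the mark's bit pattern is present in the expected positions."""
    flat = pixels.flatten()
    return bool(np.array_equal(flat[: len(MARK)] & 1, MARK))

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
marked = embed(img)
print(detect(marked))           # True
print(detect(marked // 2 * 2))  # False: zeroing the LSBs wipes the mark
```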

Mint wrote to Meta, the parent of WhatsApp and Instagram, with queries on the Meity draft but is yet to receive a response from the company.

The pushback

The government’s draft immediately sparked a pushback from India’s biggest technology bodies and digital rights groups.

In its written feedback to the ministry, software industry lobby Nasscom warned that the definition of synthetically generated information was too expansive and open-ended. It cautioned that the draft rules could affect even ordinary, non-deceptive uses of AI, such as image enhancement, accessibility tools or auto-captioning.

“Without a link to potential harm, [the] compliance effort risks being diverted away from areas where it is most needed,” stated the Nasscom feedback. It also referred to global precedents in which governments have adopted a harm-focused approach to regulating synthetic and AI-generated content, including the European Union (EU), Australia and Singapore.

Apar Gupta, executive director of the Internet Freedom Foundation, says that the 10% overlay requirement is impractical and argues that the draft, in its current form, will drive creators to avoid AI tools.

Deepfakes are a global challenge, and governments around the world are shifting from advisory measures to enforceable deepfake laws.

The EU’s AI Act, whose transparency obligations take effect in 2026, will require clear disclosure of AI-generated or manipulated content that may mislead users. Singapore’s Online Criminal Harms Act gives regulators the power to order the swift takedown of deepfake scams and misinformation. In Australia, the Criminal Code Amendment (Deepfake Sexual Material) Act 2024 makes it a criminal offence to create or share non-consensual sexually explicit deepfakes, complementing the country’s online safety framework.

The detection challenge

The clips you see while scrolling through social media, such as Mumtaz Mahal saying she needed “emotional support and not real estate” from her husband Shah Jahan while dissing the Taj Mahal, are not just entertainment. Deepfakes are now being used to shape narratives, and have begun to blur the line between news and misinformation.

Earlier this year, a video clip appeared to show journalist Rajat Sharma reporting that India and Bangladesh were on the brink of war. It looked authentic; AI-detection tools flagged it as more than 90% likely synthetic, but it spread widely before being debunked, showing how trust in familiar news formats can be manipulated.

Apart from their well-documented misuse for political campaigns and financial frauds, deepfakes are also being used for cultural and religious narratives.

Another widely shared video clip claimed to show the world’s largest Ganesh idol being built in Chennai, though in reality no such idol existed. Such clips are benign but misleading, and are reshaping how cultural and religious narratives spread online.

To verify whether the Ganesh idol video was AI-generated, Factly ran it through an AI detection tool, which flagged it as 99.9% ‘likely to be AI-generated’.

According to a quarterly report by the Deepfakes Analysis Unit (DAU), a collaboration of Indian fact-checkers, 90 suspected AI-manipulated videos and audios were escalated to the task force by Indian fact-checkers between April and June 2025, with another 14 flagged by international partners.

Over 58% of the media was in Hindi, followed by English and Urdu, reflecting how synthetic misinformation has gone regional. Facebook accounted for nearly 60% of all URLs, making it the single biggest platform for circulation, slightly ahead of YouTube and Instagram.

To counter the growing deepfake surge, especially in sensitive contexts such as politics and financial fraud, India’s main institutional response so far rests on a small collaboration between fact-checkers and researchers. The DAU, formed under the Trusted Information Alliance (which works to strengthen the digital information ecosystem) with support from Meta, acts as the country’s informal nerve centre for deepfake verification.

“Every audio and video clip we verify for possible AI manipulation or AI generation goes through three layers: human review, multiple AI detection tools and expert analysis from our forensics and detection partners in India and abroad,” said Pamposh Raina, who leads the DAU.

The DAU analyses content in seven languages: six Indic languages and English. Its WhatsApp-powered tipline receives suspected harmful or misleading AI-generated content, including deepfakes and AI-manipulated audio and video, from the public, which it then verifies. The DAU also runs a separate channel through which fact-checkers in India and abroad can escalate such content for verification.
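
A minimal sketch, and emphatically not the DAU’s actual tooling, of how such a layered review might be wired together: each layer either returns a verdict or passes the clip onward, with very short clips (which tools cannot conclusively assess, as both Dubbudu and Raina note) escalated straight to experts. The layer logic and the 0.9 threshold are illustrative assumptions drawn from the interviews quoted in this piece.

```python
# A sketch of a three-layer review in the spirit Raina describes (human
# review, detection tools, expert forensics); none of this is the DAU's
# real code, and the thresholds are illustrative assumptions.
from typing import Callable, Optional

def human_review(clip: dict) -> Optional[str]:
    # Layer 1: context and provenance checks; rarely conclusive on its own.
    return None  # pass onward

def detection_tools(clip: dict) -> Optional[str]:
    # Layer 2: automated tools; clips under ~10 seconds are inconclusive.
    if clip["duration_s"] < 10:
        return None
    return "likely synthetic" if clip["tool_score"] > 0.9 else None

def expert_forensics(clip: dict) -> Optional[str]:
    # Layer 3: forensic partners in India and abroad take over.
    return "escalated to expert forensic analysis"

LAYERS: list[Callable[[dict], Optional[str]]] = [human_review, detection_tools, expert_forensics]

def verify(clip: dict) -> str:
    for layer in LAYERS:
        verdict = layer(clip)
        if verdict is not None:
            return verdict
    return "inconclusive"

print(verify({"duration_s": 8, "tool_score": 0.95}))  # escalated to expert forensic analysis
```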

“One of the biggest challenges with tool detection is that most tools are trained on data from the Global North. They have limitations when it comes to India’s linguistic diversity,” said Raina.

In June, a viral video showed a lion entering an under-construction building in Gujarat and frightening workers sleeping inside. The video was later identified as AI-generated, but it spread widely across Gujarati-language WhatsApp groups, showing that regional-language deepfakes can travel faster than national ones and often bypass detection because existing tools are trained primarily on English datasets.

Most of the content that the DAU has analysed so far has been AI-manipulated video, the bulk of it financial scam videos.

“Typically, in such videos, the video track is authentic but the voice is synthetic or AI-generated, with voice clones being commonly used,” Raina explained. “For instance, during the Maharashtra elections, we saw voice clones pretending to be politicians just before voting day.”

The more compressed or shorter the clip, she said, the harder it becomes to detect manipulation. “Anything under 10 seconds is nearly impossible to verify with most tools we have access to.”

The quarterly report released by the DAU also found that most of the AI-manipulated content fell into two categories: political deepfakes and financial scam videos. These videos included fabricated clips of Prime Minister Narendra Modi, external affairs minister S. Jaishankar and home minister Amit Shah. It is worth noting that the report covers only the cases the unit investigates, which represent a small fraction of the AI-generated content circulating online.

Dubbudu, whose team at Factly contributes regularly to the DAU, says most verifications still rely on manual cross-checks and contextual reasoning rather than the tech tools available in the market.

“The tools are not universal or consistent across languages,” he said. “What works for English fails for regional dialects.”

What would actually work

The challenge before the government is to craft a deepfake framework that reflects India’s scale, linguistic diversity and the technical limits of AI detection.

“There’s no universal truth detector,” said Dubbudu. “Each AI detector gives you a different answer. Even the best ones only tell you there’s a 70% probability something is synthetic. That’s not proof.”
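
Dubbudu’s point maps directly onto triage logic: when several detectors return different probabilities for the same clip, the honest output is an editorial action, not a verdict. A minimal sketch, with hypothetical detector names and scores:

```python
# Hypothetical detector scores for one clip; no single number is "proof".
scores = {"detector_a": 0.70, "detector_b": 0.91, "detector_c": 0.55}

def triage(scores: dict[str, float], hi: float = 0.9, lo: float = 0.5) -> str:
    """Map disagreeing detector scores to an editorial action, not a verdict."""
    if all(s >= hi for s in scores.values()):
        return "likely synthetic: escalate for expert forensic review"
    if all(s <= lo for s in scores.values()):
        return "no strong signal: rely on manual cross-checks"
    return "detectors disagree: human review required"

print(triage(scores))  # detectors disagree: human review required
```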

He said that instead of marking every piece of edited media as synthetic, the collective focus should be on identifying the harmful ones and weeding them out. “We should start with the most dangerous kinds, like scams, impersonations and election manipulation, and expand from there,” he added.

DAU’s Raina argues that India’s deepfake response must start with clarity and raising awareness. “We need a taxonomy that distinguishes between deepfakes, manipulated content and benign AI use,” she said. “The moment everything gets called a deepfake, you trigger panic instead of understanding.”

Currently, India does not have the infrastructure to verify synthetic media at scale, especially across its many languages. “You can’t legislate capability into existence,” as Mohanty put it. “What you can do is phase it. Start with large platforms that already have some detection capacity, then expand as the ecosystem matures.”
