Meta, Big Tech peers, Nasscom, experts… all thumb down India’s AI draft rules
Social media platforms said tagging all algorithmically generated content before it is shared would require enormous scrutiny and escalate operating costs. In their submissions to the Ministry of Electronics and Information Technology (MeitY), they called the proposals too broad and asked for definitions of disinformation, deepfakes, and harmless versus harmful content to be added to the AI rules.

The rules will “make it impossible for content to be posted, shared and consumed the way they are on social media in India today,” said an executive close to Meta Platforms, which runs Facebook and Instagram in India. “Tagging every piece of content with watermarks and metadata, which too are not fail-proof processes and can be meddled with, is a massive commercial exercise. Through these rules, India will not only risk becoming the most expensive nation to run social media platforms on—it will also be near-impossible to make social media platforms work the way they do currently,” the executive said.

MeitY released the draft rules on 22 October to amend the IT Rules, 2021, aiming to regulate deepfakes by requiring social media platforms to watermark and tag algorithmically modified content. It also proposed holding these platforms, as well as AI generators such as OpenAI’s ChatGPT and Google’s Gemini, accountable for any unidentified deepfakes. The original deadline for filing feedback on the rules passed on Thursday; given the scale of feedback received, the ministry has extended it to 13 November.
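For a sense of the mechanics being debated, below is a minimal sketch, in Python, of what per-upload provenance tagging could involve. The manifest fields, the `tag_content` helper and the HMAC signing key are illustrative assumptions, not anything the draft rules prescribe; a real deployment would more likely build on a content-provenance standard such as C2PA.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key a platform might hold; a real system would use
# proper key management and a standard like C2PA rather than raw HMAC.
PLATFORM_KEY = b"example-signing-key"

def tag_content(payload: bytes, generator: str, ai_modified: bool) -> dict:
    """Build a provenance manifest for one piece of uploaded content.

    This is the kind of per-item bookkeeping the draft rules would imply:
    every upload gets hashed, labelled and signed before it is shared.
    """
    manifest = {
        "content_sha256": hashlib.sha256(payload).hexdigest(),
        "generator": generator,        # e.g. a declared AI tool, or "none"
        "ai_modified": ai_modified,    # the user's voluntary disclosure
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }
    body = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PLATFORM_KEY, body, hashlib.sha256).hexdigest()
    return manifest

if __name__ == "__main__":
    # One upload -> one manifest; at billions of uploads a day, this is the
    # scale of work the platforms say would escalate operating costs.
    print(json.dumps(tag_content(b"example image bytes", "none", False), indent=2))
```

Even this toy version shows why the industry calls the mandate a “massive commercial exercise”: the tag is only as trustworthy as the key management behind it, and metadata of this kind can be stripped or altered downstream.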

AI crossroads

A senior Nasscom official said the right approach would be to assess AI content based on its impact and potential harm, rather than on the process through which it is generated. The submission is in line with Big Tech’s suggestion that analyzing every piece of content based on whether or not it has been algorithmically edited would be difficult to implement. Nasscom is set to submit its official responses on behalf of industry stakeholders, which include Meta and Google.

Google has yet to make a submission. “There have been industry conversations where we’ve raised our view, but a formal submission will only be made in the coming days,” an executive aware of the company’s plans said.

The executive also pointed Mint to YouTube’s new content policy, which outlines what qualifies as significantly modified synthetic content and what doesn’t. On 21 October, YouTube also expanded its early-stage trials of AI likeness detection, which the platform seeks to use to crack down on deepfakes.

The executive close to Meta also said that, in its submission, the company has recommended a voluntary, reporting-based assessment of deepfakes that lead to misinformation as the regulatory approach. “If there is content that is problematic, that is likely to do with current affairs, commerce, politics and other such areas. Here, we’ve always seen users actively report content that could be misleading. This is the right way for us to regulate AI, instead of including all content in the same bucket,” the executive added.

Questions sent to Meta, Google and MeitY remained unanswered.

The official cited earlier said the feedback has “not been looked into yet”, adding that the ministry is aware there is a great deal of opinion and feedback on the rules.

Industry stakeholders said that there is scope for more conversations.

The industry believes the rules seek to simplify “a largely complex subject”, said Deepro Guha, associate director for public policy at consultancy firm The Quantum Hub. “But, the key thing to note here is that the main point of contention is how AI regulations can be implemented on ground. Companies do have a point in stating that algorithmically modified content could include absolutely anything, and that makes regulations difficult to implement. The definition is too wide—if MeitY is targeting deepfakes, the rules could have kept their ambit within audio-visual content,” Guha said.

However, others said that while some modification may be needed, the regulation is a necessity today. “The ideal form of regulation would be for the Centre to lay down outcome-based rules, rather than focus on the mechanism and process. For instance, AI regulations should focus on content that gets reported as disinformation—and offer stringent steps for platforms to actively scan for such content and curb them,” said Anushree Verma, senior director analyst at Gartner.

Verma added that companies are sure to push back, “since such a move does hamper the commercial usability of AI.” “But, for the most part, a regulatory restriction is necessary for both generative platforms and social media intermediaries,” she said.

Guha said the rules as currently drafted will require “substantial technical changes” from the platforms. “The rules take the responsibility of identifying deepfakes from the AI content generation platforms to social media intermediaries themselves. This means that significant social media intermediary platforms will now need to have some sort of a technical way to identify if a content is really AI-generated or not, if a user declines to admit in voluntary disclosures. This adds a major complication in terms of compliance for these platforms—all of which could be up for debate.”
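To illustrate the compliance gap Guha describes, here is a hedged sketch of the decision a platform would face when a user declines to disclose. The helpers `has_provenance_manifest` and `detector_score` are hypothetical placeholders for real provenance parsing and a statistical detector; both can be stripped or fooled, which is precisely the “not fail-proof” objection raised earlier.

```python
from enum import Enum

class Verdict(Enum):
    HUMAN = "human"
    AI_GENERATED = "ai_generated"
    UNKNOWN = "unknown"

def classify_upload(payload: bytes, user_disclosed_ai: bool | None) -> Verdict:
    """Decide whether an upload must be tagged as AI-generated.

    Sketches the fallback chain the draft rules would force on platforms:
    voluntary disclosure first, embedded provenance second, and finally a
    statistical guess when neither is available.
    """
    if user_disclosed_ai:                 # voluntary disclosure: the easy case
        return Verdict.AI_GENERATED
    if has_provenance_manifest(payload):  # embedded metadata, if it survived
        return Verdict.AI_GENERATED
    score = detector_score(payload)       # statistical guess, error-prone
    if score > 0.9:
        return Verdict.AI_GENERATED
    if score < 0.1:
        return Verdict.HUMAN
    return Verdict.UNKNOWN                # the rules don't say what happens here

def has_provenance_manifest(payload: bytes) -> bool:
    # Placeholder: a real check would parse C2PA/EXIF-style metadata.
    return payload.startswith(b"C2PA")

def detector_score(payload: bytes) -> float:
    # Placeholder for an ML detector; real detectors are far from fail-proof.
    return 0.5
```

The unresolved middle band, where the detector cannot commit either way, is where the compliance cost and the legal exposure concentrate, and it is the scenario the draft rules currently leave undefined.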

Key Takeaways

  • Big Tech and Nasscom say MeitY’s draft AI rules are too broad and costly to implement.
  • Platforms argue tagging/watermarking all AI-modified content will raise operating costs and slow content sharing.
  • They want clear definitions of deepfakes and disinformation, and a harm-based approach instead.
  • The draft rules propose platform liability (Meta, YouTube, Instagram, ChatGPT, Gemini, etc.) for unidentified deepfakes—a move that platforms say is unworkable at scale.
  • Nasscom’s stance: Regulation should be harm-based and outcome-focused, not based solely on how content was generated.
