The best methods we currently have for detecting and labeling deepfakes online are about to be stress-tested. On Tuesday, India announced mandates that require social media platforms to remove illegal AI-generated content much faster and to ensure that all synthetic content is clearly labeled. Tech companies have said for years that they want to achieve this on their own; now they have only a few days before they are legally obliged to implement it. The rules take effect on February 20.
India's roughly 1 billion internet users skew young, making it one of the most important growth markets for social platforms. Any liability there could therefore reshape deepfake moderation efforts around the world, either by pushing detection to the point where it actually works, or by forcing tech companies to admit that new solutions are needed.
Under India’s amended information technology rules, digital platforms will be required to deploy “reasonable and appropriate technical measures” to prevent their users from creating or sharing illegal synthetically generated audio and visual content, better known as deepfakes. Any generative AI content that is not blocked must be embedded with “persistent metadata or other appropriate technical provenance mechanisms.” Social media platforms also face specific obligations, such as requiring users to disclose AI-generated or AI-edited content, deploying tools that verify those disclosures, and labeling AI content prominently enough that people can immediately recognize it as synthetic, for example by adding audible disclosures to AI-generated audio.
This is easier said than done, given how underdeveloped AI detection and labeling systems currently are. C2PA (also known as Content Credentials) is one of the best systems we currently have for both: it attaches detailed metadata to images, video, and audio at the point of creation or editing, invisibly describing how the content was made or altered.
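To make the idea concrete, here is a minimal, hypothetical sketch of how a provenance system binds a claim to a file: hash the asset, sign the claim, and refuse to verify if either the signature or the hash no longer matches. This is not the actual C2PA format, which uses signed JUMBF manifests and X.509 certificate chains rather than a shared HMAC key; every name and the key below are illustrative.

```python
import hashlib
import hmac
import json

# Hypothetical signing key. Real C2PA signing relies on certificate chains,
# not a shared secret; HMAC is a stand-in to keep the sketch self-contained.
SECRET_KEY = b"demo-key"

def make_manifest(asset: bytes, tool: str) -> dict:
    """Sketch of a provenance manifest: bind a claim to the asset's hash."""
    claim = {
        "claim_generator": tool,  # e.g. the AI tool that produced the asset
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(asset: bytes, manifest: dict) -> bool:
    """Valid only if the signature checks out AND the asset is unmodified."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest(),
    )
    return good_sig and claim["asset_sha256"] == hashlib.sha256(asset).hexdigest()

image = b"\x89PNG...fake image bytes"
m = make_manifest(image, "example-ai-generator/1.0")
print(verify_manifest(image, m))              # True: untouched asset
print(verify_manifest(image + b"edited", m))  # False: hash no longer matches
```

The key property, which real C2PA shares, is that the claim is cryptographically tied to the exact bytes of the asset, so any edit invalidates the credential rather than silently carrying it forward.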
But here's the thing: Meta, Google, Microsoft, and many other tech giants already use C2PA, and it clearly isn't working. Some platforms, such as Facebook, Instagram, YouTube, and LinkedIn, add labels to content flagged by the C2PA system, but those labels are hard to spot, and some synthetic content that should carry the metadata is slipping through the cracks. And platforms cannot label anything that never included provenance metadata in the first place, such as content produced by open-source AI models or so-called “nudify” apps whose makers refuse to adopt the voluntary C2PA standard.
India has over 500 million social media users, according to research shared by DataReportal. Broken down by platform, that's roughly 500 million YouTube users, 481 million Instagram users, 403 million Facebook users, and 213 million Snapchat users. India is also estimated to be X's third-largest market.
Interoperability is one of the biggest issues with C2PA, and while India's new rules may encourage adoption, C2PA metadata is far from permanent. It is so easy to delete that some online platforms may strip it inadvertently during file upload. The new rules forbid platforms from allowing metadata or labels to be modified, hidden, or removed, but give them little time to figure out how to comply. Platforms like X that have not implemented any AI labeling system now have only nine days to do so.
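To see why the metadata is so fragile, here is a stdlib-only Python sketch. It builds a 1x1 PNG carrying a textual "provenance" chunk (a stand-in; real C2PA data travels in structures like JUMBF boxes, not a PNG tEXt chunk) and then shows that a single pass over the file's chunks can drop the label while leaving the pixel data intact, which is roughly what happens when a platform rewrites files on upload.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# A 1x1 grayscale PNG with a textual "provenance" chunk attached.
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + chunk(b"tEXt", b"Provenance\x00generated-by-ai")
       + chunk(b"IDAT", zlib.compress(b"\x00\x00"))
       + chunk(b"IEND", b""))

def strip_ancillary(data: bytes) -> bytes:
    """Drop every ancillary chunk (lowercase first letter), keep the image."""
    out, pos = bytearray(data[:8]), 8  # keep the 8-byte PNG signature
    while pos < len(data):
        length = int.from_bytes(data[pos:pos + 4], "big")
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        if 65 <= ctype[0] <= 90:  # critical chunks: IHDR, IDAT, IEND
            out += data[pos:end]
        pos = end
    return bytes(out)

clean = strip_ancillary(png)
print(b"generated-by-ai" in png)    # True: label present in the original
print(b"generated-by-ai" in clean)  # False: one rewrite pass removed it
print(b"IDAT" in clean)             # True: pixel data untouched
```

The stripped file is still a valid PNG that renders identically, which is exactly the problem: nothing downstream can tell that a provenance label was ever there.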
Meta, Google and X did not respond to our requests for comment. Adobe, the driving force behind the C2PA standard, also did not respond.
Adding to the pressure, India has ordered social media companies to remove illegal content within three hours of it being discovered or reported, down from the current 36-hour limit. This also applies to deepfakes and other harmful AI content.
The Internet Freedom Foundation (IFF) warned that the changes risk turning platforms into “rapid-fire censors.” “These impossibly short timeframes eliminate any meaningful human review, forcing platforms to over-remove content automatically,” the IFF said in a statement.
Given that the amendments specify provenance mechanisms be implemented “to the extent technically feasible,” the officials behind India's order probably know that current AI identification and labeling technology is not ready yet. The organizations backing C2PA have long maintained that the system will work once enough people use it. Now they have a chance to prove it.
