Under the new policy, Meta will begin labelling images created using artificial intelligence as “imagined with AI” to differentiate them from human-generated content.
In a groundbreaking move, Meta – the parent company of Facebook, Instagram and Threads – announced a new policy aimed at addressing the growing concern around AI-generated content. Under this policy, it will begin labelling images created using artificial intelligence as “imagined with AI” to differentiate them from human-generated content.
Here are the key highlights of Meta’s new policy, which was announced on Tuesday (February 6):
- Implementation of ‘imagined with AI’ labels on photorealistic images created using Meta’s AI feature.
- Use of visible markers, invisible watermarks, and embedded metadata within image files to indicate the involvement of AI in content creation (a rough illustration of how such metadata could be checked follows this list).
- Application of community standards to all content, regardless of its origin, with a focus on detecting and taking action against harmful content.
- Collaboration with other industry players through forums like the Partnership on AI (PAI) to develop common standards for identifying AI-generated content.
- Eligibility of AI-generated content for fact-checking by independent partners, with debunked content being labelled to provide users with accurate information.
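To make the metadata signal in the list above concrete, here is a minimal, illustrative sketch in Python. The announcement does not specify which markers Meta embeds, so this sketch assumes the generator wrote the standard IPTC “Digital Source Type” value for AI-generated media into the file’s metadata; it does not cover invisible watermarks, which need vendor-specific detectors.

```python
# Minimal, illustrative check for the IPTC "Digital Source Type" value
# used by industry standards to tag AI-generated media. Assumption for
# illustration only: the announcement does not say which markers Meta
# embeds, and invisible watermarks require separate detectors.

import sys

# IPTC NewsCodes URI indicating media created by a trained algorithm
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_metadata(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI marker."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP metadata is stored as plain text inside JPEG/PNG containers,
    # so a raw byte search is enough for a rough check (not a full parser).
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "AI metadata found" if has_ai_metadata(path) else "no AI metadata"
        print(f"{path}: {verdict}")
```

A check like this is fragile on its own: re-encoding or stripping a file removes the tag, which is why the policy pairs embedded metadata with visible markers and invisible watermarks.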
What did Meta say?
In a blog post, Nick Clegg, Meta’s president of global affairs, said: “While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies. While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it.”
“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” he added.
Self-regulation and the government’s role
The announcement comes amid ongoing discussions between the Ministry of Electronics and IT and industry officials on the regulation of deepfakes. Minister of State Rajeev Chandrasekhar recently said it might take some time to finalise regulations.
Meta’s pioneering move marks the first time a social media company has taken proactive steps to label AI-generated content, setting a precedent for the industry. It remains to be seen whether other tech giants will follow suit.
Experts, however, believe that government regulation is needed regardless of whether other platforms implement similar policies. Otherwise, creators or other platforms might not follow suit, leaving a fragmented landscape with varying approaches. Governments can establish clear definitions, address the various types of deepfakes (face-swapping, voice synthesis, body-movement manipulation and text-based deepfakes) and outline the consequences of misuse.
Governments can create regulatory bodies or empower existing ones to investigate and penalise offenders. Additionally, since deepfakes transcend national borders, international collaboration can ensure consistent standards and facilitate cross-border investigation and prosecution.
Nilesh Tribhuvann, founder and managing director of White & Brief, Advocates & Solicitors, said Meta’s initiative is commendable. With recent incidents ranging from financial scams to celebrity exploitation, he said, the measure is timely and essential.
“[But] governmental oversight remains imperative. Robust legislation and enforcement are necessary to ensure that all social media platforms adhere to stringent regulations. This proactive approach not only strengthens user protection but also fosters accountability across the tech industry,” he said.
Arun Prabhu, partner (head of technology and telecommunications), Cyril Amarchand Mangaldas, said: “Leading platforms and service providers have evolved responsible AI principles, which provide for labelling and transparency. That said, it is common for government regulation as well as industry standards to operate in conjunction with each other to ensure consumer safety, especially in rapidly evolving areas like AI.”