Meta to add ‘AI generated’ label to images created with OpenAI, Midjourney and other tools

06.02.2024


Meta has announced that it will identify and label AI-generated images shared on its platforms, including those created with third-party tools, ahead of the 2024 election season. The proliferation of artificial intelligence tools poses a threat to the information ecosystem, prompting Meta to take steps to improve transparency.

In the upcoming months, Meta plans to introduce “AI generated” labels for images produced by tools from major companies such as Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. This labeling initiative is an extension of Meta’s existing practice of applying “imagined with AI” labels to photorealistic images generated using its in-house AI generator tool.

Meta, in collaboration with leading AI tool developers, aims to establish common technical standards, such as invisible metadata or watermarks embedded within images, to enable its systems to recognize AI-generated content made with various tools. The labels will be rolled out across Meta’s platforms, including Facebook, Instagram, and Threads, in multiple languages.
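To illustrate the kind of signal such standards rely on, the minimal sketch below checks an image file for two publicly documented provenance markers: the IPTC "trainedAlgorithmicMedia" digital-source-type value and a C2PA content-credentials identifier. This is an assumption-laden example, not Meta's actual detection system; the file name and function are hypothetical, and real platform detection parses the embedded metadata structures rather than scanning raw bytes.

```python
# Illustrative sketch only: looks for common AI-provenance markers that
# standards bodies (IPTC, C2PA) define for AI-generated media. This is NOT
# Meta's detection pipeline; it simply shows what "invisible metadata" can
# look like inside an image file.

from pathlib import Path

# Marker strings typically found inside an embedded XMP packet or C2PA
# (JUMBF) manifest when an image carries AI-provenance metadata.
AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType for AI-generated media
    b"c2pa",                     # C2PA content-credentials manifest identifier
]


def has_ai_provenance_metadata(image_path: str) -> bool:
    """Return True if any known AI-provenance marker appears in the file bytes."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)


if __name__ == "__main__":
    # Hypothetical file name used purely for illustration.
    print(has_ai_provenance_metadata("generated_image.jpg"))
```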

Meta's announcement comes amid growing concern among online information experts, lawmakers, and tech executives that realistic AI-generated images, combined with the rapid spread of content on social media, could be used to spread misinformation and mislead voters ahead of the 2024 elections in various countries.

Acknowledging the importance of transparently labeling AI-generated content, Meta Global Affairs President Nick Clegg said, "People are often coming across AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology." Clegg stressed that transparency will matter especially during the coming year, which includes several crucial elections worldwide, as the company learns more about what users want and how AI technologies evolve.

While the industry-standard markers Meta uses to label AI-generated images do not yet extend to AI-generated video and audio, the company plans to introduce a feature that lets users disclose when shared video or audio content was created with AI. Users will be required to apply this disclosure to realistic video or audio that was digitally created or altered, with potential penalties for non-compliance.

Clegg added that when AI-generated content poses a high risk of materially deceiving the public on significant matters, Meta may apply a more prominent label. The company is also working to prevent users from stripping the invisible watermarks from AI-generated images, recognizing that adversarial actors will try to remove them in this evolving space. Clegg advised users to weigh factors such as the trustworthiness of the account sharing content and to watch for details that look unnatural when judging whether content was created by AI.
