Meta, the parent company of Facebook, has announced its efforts to combat the spread of misinformation through AI-generated images. With the advancement of generative AI, it has become increasingly difficult to differentiate between real and AI-generated images. In response, Meta plans to introduce new labels across its platforms, including Instagram, Facebook, and Threads, to indicate when an image has been AI-generated.
To establish common technical standards for identifying AI-generated content, Meta is collaborating with industry partners. Using these standards, Meta aims to apply labels in multiple languages to posts, clearly indicating that an image was created with AI. The labeling system will function similarly to TikTok’s AI-generated content labels, which were introduced in September.
Meta employs several methods to mark AI-generated images, including visible markers, invisible watermarks, and IPTC metadata embedded in each image file. These images are then given an “Imagined with AI” tag to signify their artificial origin. Additionally, Meta is developing tools to detect invisible watermarks and metadata in images generated by AI tools from other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, and to apply AI labels accordingly.
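To make the metadata approach concrete, here is a minimal sketch of how a detector might flag an image as AI-generated from embedded metadata alone. It assumes the generator has embedded an XMP packet using the IPTC DigitalSourceType vocabulary, whose standard value for generative-AI imagery is the `trainedAlgorithmicMedia` URI; the helper name and the naive byte-scan approach are illustrative, not Meta’s actual implementation, and a real detector would parse the XMP packet properly and also check watermarks.

```python
# The IPTC controlled-vocabulary URI that marks an image as created
# by generative AI (from the IPTC DigitalSourceType NewsCodes).
AI_SOURCE_URI = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_metadata(image_bytes: bytes) -> bool:
    """Return True if the raw image bytes appear to contain an XMP
    DigitalSourceType marker identifying the image as AI-generated.

    Naive sketch: scans the whole file for the URI rather than
    locating and parsing the XMP packet.
    """
    return AI_SOURCE_URI in image_bytes

# Hypothetical fragment of an XMP packet as a generator might embed it
# (element names follow the IPTC Extension schema).
sample_xmp = (
    b"<x:xmpmeta xmlns:x='adobe:ns:meta/'>"
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>"
    b"</x:xmpmeta>"
)

print(has_ai_metadata(sample_xmp))        # True: marker is present
print(has_ai_metadata(b"\x89PNG plain"))  # False: no marker
```

Because this kind of check relies entirely on the generator having written the metadata in the first place, it also illustrates the loophole discussed below: strip or never embed the marker, and the detector sees nothing.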
However, there is a potential loophole: if companies do not embed this metadata in the output of their AI image generators, Meta will be unable to tag those images with the appropriate label. Despite this limitation, Meta’s efforts are widely seen as a positive step toward addressing the problem of AI-generated content.
While Meta’s focus so far is on AI-generated images, the same level of effort has not yet been extended to AI-generated video and audio. In the meantime, Meta is introducing a feature that lets users disclose when they have used AI to generate content, allowing the company to add a label accordingly. Failure to disclose this information may result in penalties imposed by Meta.
The development of these tools is particularly crucial in light of upcoming elections. The creation of believable misinformation through AI-generated content poses a significant threat to public opinion and the democratic voting process. Other companies, such as OpenAI, have also taken steps to implement safeguards ahead of elections.
In conclusion, Meta’s initiatives to label AI-generated images and encourage disclosure of AI usage are aimed at addressing the challenge of misinformation. As AI technology continues to advance, it is essential to establish transparency and accountability in order to maintain public trust.