Generative AI has made it possible to create realistic images that look like they were taken by a human, making it difficult to distinguish between what’s real and what’s AI-generated. As a result, Meta has announced several efforts regarding AI-generated images to help combat misinformation.
Meta announced Tuesday in a blog post that, in the coming months, it will add new labels across Instagram, Facebook, and Threads that indicate when an image was AI-generated.
Also: I just tried Google’s ImageFX AI image generator, and I was shocked at how good it is.
Meta is currently working with industry partners to define common technical standards that signal when content was created using generative AI. Using those signals, Meta is building the capability to apply labels in all languages to AI-generated images posted across its platforms, as seen in the photo at the top of this article.
“As the distinction between human and synthetic materials becomes blurred, people want to know where the boundaries are,” said Nick Clegg, Meta's president of global affairs. “So it’s important that we help people know when they’re viewing photorealistic content created using AI.”
This labeling will work similarly to TikTok’s AI-generated content label, released in September, which refers to realistic images, audio or video in TikTok videos that have been AI-generated.
Also: Best AI Image Generator
Meta embeds visible markers, invisible watermarks, and IPTC metadata in every image created using Meta AI’s image generation capabilities. The company labels those images “imagined by AI” to indicate that they were artificially generated.
Meta shared that it is developing industry-leading tools that can detect invisible signals, such as invisible watermarks and IPTC metadata, in images generated by other companies’ AI tools, including those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, so it can apply AI labels to those images as well.
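Signals like IPTC metadata are stored in standard, machine-readable sections of the image file itself, which is what makes this kind of cross-platform detection feasible. As a rough illustration only (this is not Meta's detection pipeline, and `has_iptc_segment` is a hypothetical helper name), the sketch below scans a JPEG byte stream for the APP13 segment, where Photoshop-style IPTC metadata is conventionally stored:

```python
def has_iptc_segment(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an APP13 segment
    carrying Photoshop-style IPTC metadata.

    Simplified illustration of a metadata-based provenance check;
    real detectors also parse XMP, EXIF, and invisible watermarks.
    """
    if not data.startswith(b"\xff\xd8"):      # JPEG Start-of-Image marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                   # every segment begins with 0xFF
            return False
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):            # End-of-Image / Start-of-Scan:
            return False                      # no more metadata segments follow
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes its own 2 bytes
        payload = data[i + 4:i + 2 + length]
        if marker == 0xED and payload.startswith(b"Photoshop 3.0\x00"):
            return True                       # APP13 segment with IPTC data
        i += 2 + length                       # skip to the next segment
    return False
```

Note that this also demonstrates the loophole described below: the check relies entirely on the generator having written the metadata in the first place, and the segment can be stripped by re-saving the file.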
Of course, this leaves a loophole for malicious actors: if a company doesn’t add metadata to the output of its AI image generator, Meta will have no way to label those images. Still, it is a step in the right direction.
While companies have begun incorporating signals into AI-generated images, similar measures have yet to emerge for AI-generated video and audio. In the meantime, Meta is adding a feature that lets people disclose when they share AI-generated content so Meta can add a label.
Also: The Ethics of Generative AI: How We Can Use This Powerful Technology
The company will enforce this disclosure requirement by applying penalties to users who fail to comply. It also retains the ability to add more prominent labels to images, audio, or videos that pose a particularly high risk of misleading the public.
“We will require people to use this disclosure and labeling tool when they post organic content with a photorealistic video or realistic-sounding audio that has been digitally created or altered, and we may apply penalties if they fail to do so,” Clegg added.
The development of these tools comes at a particularly important time, with elections on the horizon. Credible misinformation is easier than ever to create, and it can negatively affect public opinion of candidates and disrupt the democratic voting process. As a result, other companies, including OpenAI, have also taken steps to implement guardrails ahead of the elections.