Microsoft adds new designer protections after Taylor Swift deepfake debacle



Image: Microsoft Designer (Maria Diaz/ZDNET)

AI image generators have the potential to stimulate creativity and revolutionize content creation for the better. When misused, however, they can cause real harm through the spread of misinformation and reputational damage. Microsoft hopes to prevent further abuse of its generative AI tools by implementing new safeguards.

Last week, AI-generated deepfakes sexualizing Taylor Swift went viral on Twitter. The pictures were reportedly shared via 4chan and a Telegram channel where users post AI-generated images of celebrities created with Microsoft Designer.

Also: This new iPhone app fuses AI with web search, saving you time and energy

Microsoft Designer is Microsoft's graphic design app that includes Image Creator, the company's AI image generator, which uses DALL-E 3 to create realistic images. The generator already had guardrails that blocked prompts explicitly referring to nudity or public figures.

However, users found loopholes, such as misspelling celebrities' names and describing images without explicitly sexual terms, that nonetheless produced similar results, according to the report.

Microsoft has now closed these loopholes, blocking the creation of images of celebrities. When I entered the prompt "Selena Gomez playing golf" in Image Creator, I got a warning saying my prompt was blocked. Misspelling her name produced the same warning.

Also: Microsoft adds Copilot Pro support to iPhone and Android apps

“We are committed to providing a safe and respectful experience for everyone,” a Microsoft spokesperson told ZDNET. “We are continuing to investigate these images and have strengthened our existing security measures to prevent further abuse of our services to create similar images.”

Additionally, the Microsoft Designer Code of Conduct expressly prohibits the creation of adult or non-consensual intimate content, and violating that policy may result in losing access to the service entirely.

Also: The Ethics of Generative AI: How We Can Use This Powerful Technology

According to the report, some users on Telegram channels have already expressed interest in finding workarounds to these new protections. It may well become a game of cat and mouse, with bad actors finding and exploiting flaws in generative AI tools while the companies behind them scramble to fix them.
