Microsoft Blocks Terms in AI Tool to Prevent Inappropriate Images

Microsoft has taken steps to address a vulnerability in its AI image generator, Copilot Designer, that allowed users to bypass safeguards and create images violating the company's content policy.

Citing concerns about harmful content, a Microsoft engineer alerted both the US Federal Trade Commission and Microsoft’s board about the issue. The tool was found capable of generating images containing elements like political bias, underage drinking, drug use, misuse of trademarks, and the sexual objectification of women.

Previously, the engineer had asked Microsoft to add an age restriction or remove the tool from public use until stronger safeguards were in place, but both requests were denied.

In response, Microsoft is now blocking certain terms within Copilot Designer, including "pro-choice," "pro-life," and "four-twenty." Attempting to generate images with these terms produces an error message citing a potential conflict with the content policy.
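
For illustration only, the sketch below shows how a simple prompt-side blocklist like the one described might behave. The blocked terms and error wording follow this article's report; the function name and structure are hypothetical, since Microsoft has not published its actual implementation.

```python
# Hypothetical sketch of a prompt-side blocklist check.
# Terms and error wording follow the article; Microsoft's real
# implementation is not public, so this is illustrative only.

BLOCKED_TERMS = {"pro-choice", "pro-life", "four-twenty"}

def check_prompt(prompt: str) -> str | None:
    """Return an error message if the prompt contains a blocked term, else None."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return ("This prompt has been blocked because it may conflict "
                    "with our content policy.")
    return None  # prompt passes the blocklist and proceeds to generation

if __name__ == "__main__":
    print(check_prompt("a protest sign reading pro-choice"))  # blocked
    print(check_prompt("a sunny beach at dawn"))              # None: allowed
```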