Meta’s Oversight Board has criticized the platform for failing to promptly remove an AI-generated nude deepfake of a real woman in India. The incident exposed weaknesses in Meta’s reactive approach, which relies on media reports to flag harmful content. The board emphasized that many victims of deepfake intimate imagery, particularly those who are not public figures, are left vulnerable.
The board noted that explicit deepfakes already violate Meta’s rule against “derogatory sexualized Photoshop or drawings.” However, it recommended rewording the rule to center on the lack of consent and to adopt broader terminology covering the full range of manipulated imagery, not just Photoshop edits. The board also suggested moving the policy into the Adult Sexual Exploitation Community Standard to provide clearer guidance.
Criticism also targeted Meta’s practice of automatically closing user reports within 48 hours if they go unreviewed, a policy that contributed to the delayed removal of the Indian woman’s deepfake. According to the board, this practice may have significant human rights implications.
Public comments and recent legislative actions, such as the DEFIANCE Act and the TAKE IT DOWN Act, highlight the need for social media platforms to be proactive in combating deepfake content. These measures aim to better protect victims and hold perpetrators accountable.