YouTube is developing two AI-based deepfake-detection tools to help creators combat unauthorized use of their voices and faces. The first tool, integrated into YouTube’s Content ID system, detects AI-generated singing voices and is aimed primarily at protecting musicians.
It is intended to prevent deepfake songs that mimic the voices of artists such as Drake or Taylor Swift, though it is unclear whether it will offer the same protection to lesser-known musicians.
The second tool will let public figures such as influencers and actors flag AI-generated videos that use their faces. However, YouTube has not confirmed whether it will proactively remove such content without a manual report.
Although YouTube’s updated privacy policy allows anyone to request the removal of deepfake impersonations, the platform has not said whether its detection tools will be extended to cover non-famous individuals or scam videos, which have been a growing problem.
YouTube says its goal is to support creativity, not replace it, as deepfake threats continue to rise.