OpenAI reportedly possesses a tool capable of identifying text generated by ChatGPT with 99.9% accuracy, but has kept it private. According to The Wall Street Journal, the tool has been ready for about a year but remains unreleased amid internal debate: the company is torn between a commitment to transparency and the fear of driving users away.
The tool embeds an invisible watermark in ChatGPT-generated text that specialized software can detect. Concerns include potential circumvention via translation and the risk that the watermarking technique could be reverse-engineered if access were widespread. There is also worry that non-native English speakers, who often lean on AI tools as writing aids, could be unfairly flagged.
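OpenAI has not published how its watermark works. For illustration only, the sketch below follows the green-list token-biasing scheme described in public LLM-watermarking research (e.g., Kirchenbauer et al., 2023), which may or may not resemble OpenAI's approach; every function name and parameter here is an assumption. At each generation step the vocabulary is pseudorandomly split into "green" and "red" sets, seeded by the previous token; the sampler slightly favors green tokens, and a detector that knows the seeding rule checks whether green tokens appear more often than chance.

```python
import hashlib
import math
import random

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step (assumed)
BIAS = 2.0            # logit boost applied to green tokens during generation (assumed)

def green_tokens(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    Because the seed is derived only from the text itself, a detector can
    recompute the same partition without access to the model.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def bias_logits(logits: dict[str, float], prev_token: str,
                vocab: list[str]) -> dict[str, float]:
    """At generation time, nudge the model's next-token scores toward green tokens."""
    green = green_tokens(prev_token, vocab)
    return {tok: score + (BIAS if tok in green else 0.0)
            for tok, score in logits.items()}

def detect(tokens: list[str], vocab: list[str]) -> float:
    """Return a z-score: how far the observed green-token rate exceeds chance.

    Unwatermarked text should land near 0; watermarked text scores high.
    """
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_tokens(prev, vocab)
    )
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```

Because the partition is keyed to the exact token sequence, translating or heavily rewriting the text scrambles the signal, which is the circumvention risk noted above.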
Despite the internal debate, proponents argue that the benefits of releasing the tool outweigh the drawbacks, believing it could help curb misuse such as academic cheating. Meanwhile, Google's comparable watermarking tool, SynthID, is in beta testing and likewise not publicly available.
As the discussion continues, the open question is whether the greater good of transparency and integrity will prevail over the risk of deterring users and the technical challenges involved.