Ex-OpenAI Chief Scientist Launches ‘Safe Superintelligence’ Startup

Ilya Sutskever, OpenAI’s co-founder and former Chief Scientist, has launched a new AI venture named Safe Superintelligence Inc. Joined by co-founders Daniel Levy and Daniel Gross, the startup aims to develop powerful AI safely.

Sutskever emphasized the unique focus of the company: “Our sole objective is to create a safe superintelligence, and we won’t pursue other products until that goal is achieved.” The company plans to tackle safety and capabilities simultaneously, ensuring advancements in AI do not compromise safety. “We aim to advance capabilities rapidly while keeping safety measures ahead,” the company stated.

Sutskever’s role in OpenAI’s leadership upheaval, including the brief ousting of CEO Sam Altman, underscored how seriously he takes the direction of AI development. He has previously predicted that superintelligence could arrive within this decade, a vision he now aims to pursue through his new venture.

The startup’s minimalist website and open job listings point to a drive to build a “high-trust, lean team” capable of groundbreaking work. Levy expressed his excitement, stating, “Safe Superintelligence will be a high-trust team producing miracles.”

Sutskever clarified that by “safe,” he refers to safety on a grand scale, akin to nuclear safety, highlighting the company’s commitment to responsibly advancing AI.