Former OpenAI Chief Scientist Ilya Sutskever Launches New AI Company
Ilya Sutskever, co-founder and former chief scientist of OpenAI, has taken a bold step forward in artificial intelligence. Just one month after departing OpenAI, he has introduced his latest venture: Safe Superintelligence Inc. (SSI). The new for-profit company aims to address one of the most pressing issues in AI development: ensuring the safety of superintelligent systems.
Sutskever has teamed up with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy to establish SSI. Their goal is to build superintelligent AI that surpasses human abilities while prioritizing safe and ethical development. Unlike OpenAI, which began as a non-profit organization, SSI has been structured as a for-profit entity from the outset.
In a 2023 blog post, Sutskever and Jan Leike, who co-led OpenAI’s Superalignment team, highlighted the potential emergence of superintelligent AI within the next decade. They stressed the importance of researching ways to control and restrict such powerful systems. Sutskever’s dedication to this cause is evident in his tweet announcing SSI’s formation, where he stated, “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.”
SSI’s strategy is both ambitious and pragmatic. By advancing AI capabilities in lockstep with safety, the company aims to scale without distraction from management overhead or short-term commercial pressures. Sutskever has declined to disclose details about funding or valuation, but given the intense investor interest in AI and the founding team’s track record, raising capital is unlikely to be an obstacle.
SSI’s approach signals a different paradigm for AI development. Treating safety and advanced capabilities as a single technical problem reflects a clear-eyed view of the field’s challenges and opportunities, and this dual focus could position SSI as a leader in the industry, shaping how future AI systems are developed and deployed.
The establishment of SSI carries significant implications for the AI community and beyond. As AI systems grow more capable, developing safe and ethical superintelligent systems becomes ever more important. SSI’s work could pave the way for more responsible AI innovation across sectors such as healthcare, finance, and education.