Ilya Sutskever, the former chief scientist and co-founder of OpenAI, has announced the launch of his own artificial intelligence (AI) company, Safe Superintelligence.
The new firm will focus on safety, with Sutskever stating that building safe AI is "our mission, our name, and our entire product roadmap". The company's launch statement on its website emphasised a commitment to approaching "safety and capabilities in tandem" as "technical problems to be solved", pledging to "advance capabilities as fast as possible while making sure our safety always remains ahead".
This comes amid criticism that major tech and AI firms are prioritising commercial benefits over safety principles, a concern voiced by several former OpenAI staff members when they left the company. Elon Musk, another co-founder of OpenAI, has also accused the company of straying from its original mission of developing open-source AI in favour of commercial gain.
In what seems to be a direct response to these concerns, Safe Superintelligence's launch statement declared: "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
Mr Sutskever, who was part of the high-profile attempt to remove Sam Altman as OpenAI's chief executive last year, left the company in May, having been removed from the firm's board following Mr Altman's swift return. He now finds a home at Safe Superintelligence alongside ex-OpenAI researcher Daniel Levy and former Apple AI lead Daniel Gross, both of whom are co-founders of the new firm, which has offices in California and Tel Aviv, Israel.
Referring to the new venture, the founders claimed: "It's the world's first straight-shot SSI (safe superintelligence) lab, with one goal and one product: a safe superintelligence." They called it the "most important technical problem of our time".