OpenAI co-founder and former chief scientist Ilya Sutskever has announced the launch of his new artificial intelligence company, Safe Superintelligence. The company's stated goal is to develop superintelligent AI safely, a mission it frames against the backdrop of the ongoing generative AI boom and the growing dominance of major tech companies in the field. With offices in Palo Alto and Tel Aviv, Safe Superintelligence describes itself as an American firm.
Sutskever played a crucial role in the removal and subsequent rehiring of OpenAI CEO Sam Altman last November. Following Altman's return, Sutskever was removed from OpenAI's board and ultimately left the Microsoft-backed company in May.
Joining Sutskever in the venture are former OpenAI researcher Daniel Levy and Daniel Gross, co-founder of Cue and a former AI lead at Apple. According to the founders, the team's singular focus on safe superintelligence eliminates distraction from management overhead and product cycles, and the company's business model is designed to insulate safety, security, and progress from short-term commercial pressures.
Safe Superintelligence launches at a time of significant advancement and intensifying competition in the AI industry. By making safety the company's sole priority, Sutskever and his team aim to address the risks and challenges that accompany the rapid growth of AI technology.