Cofounder and former chief scientist of OpenAI, Ilya Sutskever, has raised $1 billion for his new tech startup that plans to address safety and ethical issues related to artificial intelligence (AI).
Sutskever founded his new startup, Safe Superintelligence Inc. (SSI), in June, and the initial investments secured from renowned venture capital firms — including NFDG, a16z, Sequoia Capital, DST Global, and SV Angel — signal how seriously investors take the risks associated with rapid advancements in AI.
The demand for AI safety measures and guardrails is evident across the U.S. workforce. Nearly 75% of U.S. workers worry that AI will contribute to job losses, and 77% of working adults report some lack of trust in businesses to use AI responsibly, with 44% saying they have “not much trust” and 33% saying they have no trust at all.
Although the startup is only three months old, has just 10 employees, and has no product on the market yet, Reuters reports that the investment values it at $5 billion. The company will reportedly use the initial investment to recruit a team of talented researchers and engineers.
Several analysts believe the funding will give the company the means to compete at the highest level in the field. The funding and subsequent valuation make SSI well-equipped to attract top talent in the niche and budding field of AI safety. The valuation also underscores how fierce the competition for talented AI developers and researchers has become since the launch of ChatGPT in November 2022.
The specific focus on safety and ethical considerations distinguishes SSI from other AI developers such as OpenAI, Google AI, and Meta AI. CNBC reports that Sutskever cofounded SSI with Daniel Gross, who previously oversaw Apple’s AI and search efforts, and Daniel Levy, another former OpenAI employee.
For the broader workforce, the rise of companies like SSI could mean a greater industry emphasis on AI safeguards, directly influencing the future of work and potentially alleviating fears about job displacement and misuse of the technology.
As other AI companies prioritize cutting-edge advancements, SSI’s focus on ethics could influence broader industry practices and create new roles centered around AI governance, safety, and policy.
Regulatory environments for AI are becoming increasingly stringent, and companies like SSI will be positioned to play a significant role in shaping the future development of AI and its regulation. The startup’s commitment to responsible AI development could serve as a blueprint for other companies in the sector, potentially paving the way for safer and more ethically aligned AI technologies.
Sutskever left the board of OpenAI in November 2023 along with board members Helen Toner and Tasha McCauley. All three had previously voted to remove Sam Altman as the company’s CEO. The New York Times reports that the departing board members were concerned about the rapid pace of AI development leading up to Altman’s brief ousting from the company.