- By Ashish Singh
- Thu, 05 Sep 2024 07:25 PM (IST)
- Source: Reuters
Safe Superintelligence (SSI), recently co-founded by Ilya Sutskever, the former head scientist of OpenAI, has raised $1 billion in funding to support the development of safe artificial intelligence systems that far surpass human capabilities, according to firm officials who spoke with Reuters.
SSI, which currently employs 10 people, plans to use the funds to acquire computing power and recruit top talent. Its immediate goal is to build a small, highly trusted team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel.
People with knowledge of the matter said the company was valued at $5 billion, though SSI declined to disclose its valuation. The funding shows that some investors are still willing to make outsized bets on exceptional talent pursuing foundational AI research, despite a general waning of interest in backing such companies, which can remain unprofitable for years and whose struggles have driven a number of startup founders to leave for tech giants.
Among the investors were renowned venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership headed by SSI CEO Daniel Gross and Nat Friedman, participated as well.
"It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross said in an interview.
AI safety, meaning the prevention of AI from causing harm, has become a prominent topic amid fears that rogue AI could act against human interests or even lead to the extinction of humanity.
> SSI is building a straight shot to safe superintelligence.
>
> We’ve raised $1B from NFDG, a16z, Sequoia, DST Global, and SV Angel.
>
> We’re hiring: https://t.co/DmFWnrc1Kr
>
> — SSI Inc. (@ssi) September 4, 2024
The industry is divided over a California bill that seeks to impose safety regulations on AI companies. Companies such as Google and OpenAI oppose it, while Anthropic and Elon Musk's xAI support it.
Sutskever, 37, is one of the most influential technologists in AI. He co-founded SSI in June with Daniel Levy, a former OpenAI researcher, and Gross, who previously led AI initiatives at Apple.
Sutskever is SSI's chief scientist, Levy is its principal scientist, and Gross is responsible for computing power and fundraising.
Sutskever said the impetus for launching a new company was that he "identified a mountain that's a bit different from what I was working on."
Last year, he served on the board of OpenAI's non-profit parent, which voted to remove OpenAI CEO Sam Altman, citing a "breakdown of communications."
Within days, he reversed course and joined almost every OpenAI employee in signing a letter demanding Altman's return and the board's resignation. But the episode diminished his role at OpenAI: he was removed from the board, and in May he left the company.
Following Sutskever's exit, the company disbanded its "Superalignment" team, which worked to ensure AI stays aligned with human values in anticipation of the day AI surpasses human intelligence.
SSI has a conventional for-profit structure, in contrast to OpenAI's unusual corporate structure, which was put in place out of AI safety concerns but ultimately made Altman's ouster possible.
At the moment, SSI is focused heavily on hiring people who will fit its culture. According to Gross, the team spends hours vetting candidates for "good character" and looks for people with exceptional abilities rather than overemphasizing credentials and prior work experience.
"One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype," he continued.
SSI says it plans to partner with cloud providers and semiconductor companies to meet its computing power needs, though it has yet to decide which firms it will work with.
Sutskever was an early champion of scaling, the hypothesis that AI models improve in performance when given enormous amounts of computing power. The idea and its implementation sparked a surge of AI investment in chips, data centres, and energy, laying the foundation for generative AI breakthroughs such as ChatGPT.
Without providing specifics, Sutskever said he would approach scaling differently from his previous firm.
"Everyone just says 'scaling hypothesis.' Everyone forgets to ask, what are we scaling?" he said.
"Some people can put in really long hours and just go down the same path faster. That's not so much our style. But if you do something different, then it becomes possible for you to do something special," he added.
