About a month after leaving OpenAI, where he long served as chief scientist, Ilya Sutskever, a co-founder of the company and a pioneer in artificial intelligence, has launched an ambitious new venture.
Together with Daniel Gross, a former partner at Y Combinator, and Daniel Levy, who previously worked as an engineer at OpenAI, Sutskever has established Safe Superintelligence Inc. (SSI), a company with a well-defined mission that is crucial for the future of the human race: creating a safe and powerful superintelligent AI system.
During his time at OpenAI, Sutskever played a leading role in the company's efforts to improve AI safety and security, with a particular focus on controlling "superintelligent" AI systems, i.e., systems whose intelligence surpasses that of humans. He pursued this line of research together with Jan Leike, who later left OpenAI to join Anthropic, a rival AI company.
Expectations and fears for the future
In 2023, in a post on the OpenAI blog, Sutskever and Leike made a striking prediction: artificial intelligence with cognitive abilities surpassing those of humans could arrive within a decade.
However, they warned that such an achievement, however extraordinary, would not necessarily be benevolent toward humanity. This awareness made it urgent to find ways to control and constrain an artificial intelligence that advanced and potentially dangerous.
In a post on Twitter, SSI clearly describes its mission, its name, and its entire product roadmap, all of which revolve around the goal of achieving "Safe Superintelligence". Its team, investors, and business model are all aligned with this aim, treating AI safety and capabilities as technical problems to be solved through engineering and scientific breakthroughs.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It's called Safe Superintelligence...
— SSI Inc. (@ssi) June 19, 2024
A bold development strategy
SSI's approach is as ambitious as it is essential: advance AI capabilities as fast as possible while ensuring that safety and security always stay ahead. In this way, the company can scale in peace, undistracted by the overhead of product management cycles and insulated from short-term commercial pressures that could compromise its core mission (perhaps a dig at OpenAI?).
To pursue this critical mission, SSI has established a global presence, with offices in Palo Alto, in the heart of Silicon Valley, and in Tel Aviv, Israel, where the company is currently recruiting new talent.