10 Catchy Titles for AI Safety Enthusiasts

Published On Sat Jun 22 2024

The whole AI boosters vs. doomers safety debate is a bit tired. I mean the robots that will subjugate us haven’t even been built yet. Or have they?

Ilya Sutskever, OpenAI co-founder and former chief scientist, has launched a new startup focused on AI safety.

A key AI figure who has broken away from the pack seems to have had the robots or some other dystopian vision in mind when he conceived his new startup.

The Mission of Safe Superintelligence (SSI)

What Sutskever is actually pursuing is the most interesting part here. The very existence of Sutskever's Safe Superintelligence Inc. (SSI) appears to be saying the following: AGI is coming, maybe sooner than you think, so let's get ahead of it with a safer approach to building these models. Or something like that.

At OpenAI, Sutskever co-led the Superalignment team, which was tasked with making sure AI development was on the right path. The team was dissolved after Sutskever left OpenAI. His clash with CEO Sam Altman reportedly involved the degree to which guardrails should be applied to OpenAI's development efforts.

“Building safe superintelligence (SSI) is the most important technical problem of our time,” Sutskever (and his co-founders) said while announcing the launch on X.

Focus on AI Safety

Sutskever seems confident there is a business in AI safety. So what does SSI do? If you go to what appears to be the SSI Inc. website, all you will find is a letter, dated June 19 and signed by founders Sutskever, Daniel Gross, and Daniel Levy. It’s the same message that was posted on X.

Gross formerly led Apple’s AI efforts. Levy is an OpenAI alum.

It goes on to say, “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

So it seems that SSI wants to make advancements in AI safety that big companies will embrace. Why? Maybe so they can credibly claim that they are using AI safely. Or something like that.

Conclusion

I often say that the amount of money a company raises is the least interesting thing about it. I am not sure that holds up in this case. It will be very telling to see how much SSI raises, how quickly, and from whom.