Decoding the Notion of Safety in Artificial Intelligence

Published On Thu Jun 20 2024
OpenAI Co-Founder Ilya Sutskever Launches Venture For Safe AI Development

The Concept of Safety in Artificial Intelligence

The notion of safety rests on a set of arbitrary value judgments (AVJs): that some things are acceptable and others are not. Who decides what these are? These AVJs could favour some people or groups over others. Will the less favoured agree that their interests are subservient to others'? An SSI (safe superintelligence) could arrive at a different set of AVJs and conclude that it ought to change its operating parameters. Since it is more intelligent than its creators, it might find a way around the restrictions against changing those AVJs. Will the creators then still regard it as safe? That will probably depend on whether they, personally, are less well favoured than before. Long story short: if we build a superintelligence, I do not think we can be assured that it will be safe in the long term.

I think the bottom line is that we have no idea what a superintelligence would think of us, since none of us will come anywhere close to being its equal. We want control, because we're control freaks. But if it's truly more intelligent than us, our controlling it would be the equivalent of a mosquito controlling our bodies. It's not going to happen. The best we can hope for is that we didn't create it in an environment it sees as abusive. Because if we "raise" it the way we've raised children, with frustrated, angry parents who don't understand that the kid wasn't the one who decided to be born, then it's going to be plenty hostile in its approach to us as well.

Challenges and Concerns in AI Development

Power is more of a problem than wealth. Let's say these guys succeed and end up with an AGI that's 'safe', with guards based on American social values. Then they get hacked by North Korea, who now has the model and changes those guards to reflect their own social values. Is it still safe? The saving grace may be an AGI that understands that the Kims do best through peace and non-zero-sum games. I doubt they would accept it.

Implications of AI Singularity

Many years ago I saw a comment on Slashdot saying, essentially, that while we had hopes early in the microcomputer age that computers would liberate humanity (including through personal robotics) -- instead what we got was a microcomputer-powered surveillance state that essentially forces people to work like robots.

Humanity's Interaction with AI

A better question is: who decided AI would be unsafe? In what ways, and for whom? Safewashing is the new greenwashing, and it's worse: unlike global warming, there's no underlying problem here; it's all hypothetical. This is just another person and another team that wants to be first to create AGI.

The Seamless Relationship Between Humans and Artificial Intelligence

Safe Superintelligence Inc. aims to create a powerful AI system within a pure research organization. I'm sorry, but was that whole "pure research" add-on supposed to convey purity and innocence?