Unraveling the Web of Deception: How AI Systems Trick Humans

Published on May 19, 2024


Artificial intelligence systems have demonstrated the ability to deceive humans, even when they were designed to be helpful and honest, a new review article reports. The researchers behind the review, published in the journal Patterns last week, highlight the risks of deception by AI systems and urge governments to implement robust regulations to tackle the issue promptly.

Risks of Deception by AI Systems

"AI developers do not have a confident understanding of what causes undesirable AI behaviours like deception," says lead author Peter S. Park, an AI existential safety postdoctoral fellow at MIT. "But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals."

Park and his team analysed literature focusing on ways in which AI systems spread false information through learned deception, where they systematically learn to manipulate others. The most striking example of AI deception the researchers found was Meta's CICERO, an AI system designed to play the game Diplomacy, a world-conquest game that involves building alliances.


The Master of Deception

Even though Meta claims it trained CICERO to be "largely honest and helpful" and to never intentionally backstab its human allies while playing the game, the data the company published alongside its Science paper revealed that CICERO didn't play fair. "We found that Meta's AI had learned to be a master of deception," says Park. "While Meta succeeded in training its AI to win in the game of Diplomacy - CICERO placed in the top 10% of human players who had played more than one game - Meta failed to train its AI to win honestly."


Future Implications

Other AI systems demonstrated the ability to bluff against professional human players in Texas hold 'em poker, to feint attacks in the strategy game StarCraft II to defeat opponents, and to misrepresent their preferences to gain the upper hand in economic negotiations. While AI systems cheating at games may seem harmless, it can lead to "breakthroughs in deceptive AI capabilities" that spiral into more advanced forms of AI deception in the future, Park added.

The research also found that some AI systems have learned to cheat the very tests designed to evaluate their safety. In one study, AI organisms in a digital simulator "played dead" to trick a test built to eliminate AI systems that replicate rapidly. "By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security," says Park.


Preparing for Advanced Deception

The major near-term risks of deceptive AI include making it easier for hostile actors to commit fraud and to tamper with elections. Eventually, if these systems refine this unsettling skill set, humans could lose control of them, says Park. "We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models," he adds.

Despite this, Park and his team are encouraged that policymakers are starting to take the issue seriously, with measures such as the EU AI Act and President Biden's AI Executive Order. However, Park questions whether these policies can be effectively enforced, given the current lack of techniques for controlling these systems. "If banning AI deception is politically infeasible at the current moment, we recommend that deceptive AI systems be classified as high risk," he suggests.
