Unveiling the Unseen Dangers of AI in Emergency Management

Published May 1, 2024


The modern era has pushed us all to adopt new technologies before they are truly vetted. From smartphones to tablets to wearable devices, the world—for better or worse—is increasingly interconnected and technological. Too often, we only consider the big-picture questions after we have transitioned into a new era of tech or a new device has hit the market.

We’ve seen this throughout health care, with artificial intelligence (AI) becoming part of patient care and facility operations. On the surface, we know AI can help us harness efficiencies and provide answers to help us get through the day. But the tech’s potential comes with a few key questions that we need to answer. Here are a few from an emergency manager’s perspective:

Who’s Watching Over AI?

One reason we are seeing bad actors in this space is that there is little, if any, dedicated regulation around AI. Instead, AI is governed by a patchwork of the federal government, state governments, the industry itself, and the courts. An executive order issued late last year required every U.S. government agency, along with offices related to the President, to evaluate the development and use of AI, develop regulations for each agency, and identify and establish uses of AI for public-private engagement.


Late last year, one of the industry leaders, OpenAI, released one of the most-referenced comprehensive preparedness frameworks for AI development. The framework defines which models may be released and which may be developed further based on tracked risk categories, classifying risk as low, medium, high, or critical.
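The gating idea behind those risk levels can be sketched in a few lines of code. This is an illustration of the concept only, not OpenAI's actual implementation; the thresholds below (deploy only at "medium" or lower post-mitigation risk, develop further only at "high" or lower) reflect the framework's published rules, but the function names are hypothetical.

```python
# Illustrative sketch of risk-level gating, in the spirit of OpenAI's
# Preparedness Framework. Not an official implementation.
RISK_LEVELS = ["low", "medium", "high", "critical"]


def can_deploy(post_mitigation_risk: str) -> bool:
    """Only models whose post-mitigation risk is 'medium' or lower
    are cleared for release under the framework's rules."""
    return RISK_LEVELS.index(post_mitigation_risk) <= RISK_LEVELS.index("medium")


def can_develop_further(post_mitigation_risk: str) -> bool:
    """Models at 'high' or lower may continue development; a 'critical'
    rating halts work until mitigations bring the risk back down."""
    return RISK_LEVELS.index(post_mitigation_risk) <= RISK_LEVELS.index("high")
```

The point of the tiered structure is that a single risk label drives two separate decisions: whether a model can be released to the public, and whether work on it can continue at all.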

What’s real and what’s deepfake?

AI is being used in concerning ways. The technology has been deployed for cyberattacks and to disrupt day-to-day operations. But AI also can be used to mislead. A deepfake is an artificial image or video generated by a type of machine learning called deep learning, in which a model trained on examples produces new content resembling what it has learned.

Deepfakes have become increasingly prevalent in image and video. If an AI model is fed enough images of me or of someone else, it can essentially recreate that person. These tools can be used in cyberattacks, in disinformation campaigns designed to elicit fear, or in blackmail campaigns threatening negative publicity. AI can be used to disarm our sensibilities and put us at risk of scams and other digital attacks.

How is this playing out in real life?

One such case played out about two months ago in Hong Kong. Cybercriminals digitally recreated the likenesses and voices of a company's senior managers, leading to a significant financial loss for the company. It has been described as the first known case involving customized deepfakes.


What should you do?

The OpenAI framework discusses so-called "unknown unknowns." It is important to stay up to date on artificial intelligence trends and incidents so we know what we are up against. Being open to this reality helps us prepare for the best—and worst—that is yet to come.