The Hidden Dangers of the Viral AI Trends for Kids' Privacy

Published on Sun Apr 20 2025

Why That Cute AI Trend Isn't Worth Your Child's Privacy

You might’ve seen the viral doll trend doing the rounds, where you create a toy version of yourself using AI, complete with accessories that represent your interests, and then share it on social media. There has also been the Studio Ghibli trend, where you can create personalized images purportedly in the style of the animation house, and before that, the AI yearbook trend. You can even share photos of kids to see what they might look like when they’re older. Reports suggest ChatGPT, made by OpenAI, saw a record number of users this year thanks to the rollout of its image generator, which prompted the company’s boss, Sam Altman, to ask people to “please chill” as its graphics processing units were struggling to keep up with demand.

Caution Against Uploading Kids' Photos

With such tools gaining huge popularity, experts are cautioning against uploading photos of little ones to them. In a new reel, Dr Madhumitha Ezhil – who runs The Screenfree Parent Instagram account – opened up about how uploading children’s photos to AI tools “feels harmless”. But she added that when we do this, we are giving an AI company “our child’s face – to store, to study and to learn from”. “And now they may be able to accurately predict how a child may look like in future,” she added, “and that’s not just impressive, it’s also very dangerous. Their faces may be used to train facial recognition systems, build eerily realistic deepfakes or even be sold to an unknown third party.”

Privacy Concerns and Risks

Dr Ezhil said in her video: “We are the first generation to be raising kids in the age of AI – and it is my personal opinion that it is better to err on the side of caution because once we upload their data, we may never get it back.” HuffPost UK contacted OpenAI about Dr Ezhil’s concerns and the company declined to comment. ChatGPT users do have some control over how their data is used, as Dr Ezhil mentioned in the caption for her video. There are self-service tools for people to access, export, or delete personal information, and you can opt out of having your content used to improve and train AI models.


AI Models and Child Exploitation

HuffPost UK also understands the AI platform doesn’t actively seek out personal information to train its models, and that publicly available information on the internet isn’t used to build profiles about people or sell their data. The same can’t be said for other AI models, however. ChatGPT also doesn’t allow photorealistic edits to images of children, but people can still upload images of kids to the tool (I was able to upload a stock photo of a baby and ask the tool to generate an image of what they might look like at age 25, which it did in a few minutes).

Expert Opinion on Child Privacy

Dr Francis Rees, a lecturer in law who is spearheading the Child Influencer Project, told HuffPost UK: “If you think about putting something through on ChatGPT, you’re giving it a lot of information – you’re giving it facial recognition ability and the identity of the child, but also anything in their background such as school uniform crests, pets, their bedroom, their house number, that sort of information, as well as any metadata from GPS or the phone itself. Parents might not understand that that’s what is happening when they’re feeding it into the machine, effectively.”
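Rees’s point about metadata is worth unpacking: photos taken on a phone typically carry EXIF data, which can include GPS coordinates and device details. As a minimal sketch of one precaution (assuming the Python Pillow library; the file names are illustrative), here is how a parent could strip that metadata from a photo before sharing it anywhere:

```python
# A minimal sketch of stripping EXIF metadata (GPS coordinates, device
# details) from a photo before sharing it. Assumes the Pillow library
# (pip install Pillow); file names are illustrative.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, leaving EXIF behind."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copies pixels, not metadata
        clean.save(dst_path)

strip_metadata("kids_birthday.jpg", "kids_birthday_clean.jpg")
```

Note that this removes embedded metadata only; anything visible in the frame itself, such as a uniform crest or a house number, still needs a manual check.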


Sharenting and Deepfakes

Sharenting is the practice of parents sharing photos, videos, and details of their children’s lives online – often on social media, where they might have a number of followers they don’t even know. Images of children shared online can be used to create sexually explicit deepfakes – fake audio, images, and videos that have been generated or manipulated using AI but look and sound like genuine content. According to Internet Matters, 13% of teens have had an experience with a nude deepfake – and those are just the cases we know about. Many parents will simply not know if their child’s content has been taken and used in a nefarious way. Images – whether real or fake – can then be used to intimidate or blackmail victims. There’s also the risk of identity fraud in the future.

Conclusion

Ultimately, parents are the privacy guardians for their kids – so it’s your choice. But it’s wise to be mindful of the risks. “I think there’s just not informed consent there,” said Dr Rees of sharing children’s images with AI models. “Because children, even if they said they were OK with it, they wouldn’t have the ability to understand the ramifications of it. So parents as the privacy guardians have to be aware of that. Think about: why would you be posting? What are you gaining from it? And what harm could it do? Who needs a photo of my child? Who needs to know about my child’s bedroom? Who needs to know about my child’s pet? Why would I be sharing that on an open public platform or feeding it into a machine where that could be shared... I think it’s an important consideration for parents to take a beat and really interrogate themselves as to why they would be doing that.”
