Online safety / August 07, 2024

Talking to kids about deepfakes

Matthew Johnson

Director of Education, MediaSmarts


Have you seen an image or video recently that you thought was real, but turned out to be fake? Most fake images online are still simple manipulated photos, or real images presented in a misleading context, but more and more are deepfakes – false but convincing images of real people and things, like the Pope wearing a puffer coat or Katy Perry and Rihanna attending the 2024 Met Gala. (The Katy Perry image was convincing enough to fool her own mother.)

Deepfakes, like other types of generative AI, are created using advanced algorithms called diffusion models. These models are trained by adding random noise to an image, step by step, until it becomes unrecognizable, and then testing millions of ways to reverse those steps and restore the original image, a process known as reverse diffusion. Once the AI learns how to do this, it can create a digital model of a person that can be easily posed and modified, similar to the "digital doubles" used in Hollywood movies.
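If you're curious what that "add noise, then learn to remove it" process looks like, here is a very rough sketch in Python (assuming the numpy library is installed). It is not a real image generator and nothing in it is trained; it only illustrates the two phases described above, with a simple placeholder standing in for the neural network a real diffusion model would use.

```python
# A toy sketch of the diffusion idea, for illustration only.
# Assumptions: Python 3 with numpy installed; an 8x8 grid of numbers stands in
# for a photo, and a placeholder stands in for the trained neural network.

import numpy as np

rng = np.random.default_rng(0)
steps = 10
noise_scale = 0.3

# Stand-in for a photo: a small grid of pixel brightness values.
image = rng.uniform(0.0, 1.0, size=(8, 8))

# Forward diffusion: keep adding random noise until the "photo" is unrecognizable.
noisy = image.copy()
for _ in range(steps):
    noisy = noisy + rng.normal(0.0, noise_scale, size=noisy.shape)

# Reverse diffusion: a trained model would predict, at each step, what noise was
# added and remove it. There is no trained model here, so we cheat by peeking at
# the original image, just to show the shape of the reversal loop.
recovered = noisy.copy()
for _ in range(steps):
    predicted_noise = (recovered - image) / steps  # placeholder for the model's guess
    recovered = recovered - predicted_noise

print("difference from original before reversal:", round(float(np.abs(noisy - image).mean()), 3))
print("difference from original after reversal: ", round(float(np.abs(recovered - image).mean()), 3))
```

In a real diffusion model, the placeholder would be a neural network trained on a huge number of images, which is what lets it generate new, convincing pictures rather than simply restoring one it has already seen.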

While a few years ago it took a fair bit of technical skill to make a deepfake, today the tools for making them are widely available and easy to use. Although the most popular tools have settings that make it hard to use them for harmful purposes, some people find ways to circumvent them, as is the case with many technologies. Unfortunately, there are also deepfake tools available that don’t have such restrictions – and in some cases, have been made specifically to do harm.

We need to make sure that youth know about deepfakes and how to deal with them, but we also have to be careful: making people more aware of deepfakes, without giving them the tools to identify them, can backfire by making them more skeptical of both false and true content. Because teens base their ideas of what’s right and wrong largely on what they think is normal among their peers, we also have to be careful not to make malicious deepfakes seem more common or widespread than they really are.

How can we spot deepfake disinformation?

We can’t rely on what our eyes tell us, because many of the clues we look for, like uneven eyes or extra fingers, are much less common in images made with more recent deepfake tools. More importantly, if we take this approach we are bound to find reasons to call a real image a deepfake whenever we don’t want it to be true. As Eliot Higgins, founder of the investigative journalism site Bellingcat, puts it, “The true threat of AI generated content is not that it will convince us to believe things that aren't true, it's that we won't believe anything unless it reinforces what we already think is true, or we'll just disengage completely because the truth seems impossible to find.”

Instead, we should teach youth to rely on information-sorting techniques such as those outlined in MediaSmarts’ Break the Fake program. These techniques turn to other sources that are harder to fake. For instance, using a reverse image search tool on a real photo of Rihanna at the 2023 Met Gala leads to the “Today Show” and other reliable sources. Searching for the fake 2024 photo only leads to social media posts and to articles telling you it’s a deepfake, while other ways of looking for the original source usually lead to social media or forum posts from people who aren’t likely to have taken the photo. We can also turn to professional fact-checkers (you can search more than a dozen at once using MediaSmarts’ fact-checker search), who do have the technical knowledge to verify or debunk a photo. Finally, any image or video that would be big news – like a politician or a celebrity doing something shocking – is likely to be covered by reliable news outlets. If none of them are covering it, it’s better to wait before you believe or share it.
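For families comfortable with a little code, here is a minimal sketch in Python of one of the habits above: checking a suspicious image with reverse image search tools. The web address formats for Google and TinEye below are ones those services have used and could change, and the example image address is made up, so treat this as an illustration of the idea rather than a guaranteed recipe.

```python
# Build reverse image search links for a suspicious image.
# Assumptions: the image is already online at a known web address, and the
# URL formats below (used by Google and TinEye) still work; they may change.

from urllib.parse import quote

def reverse_image_search_links(image_url: str) -> dict:
    """Return links that open the image in reverse image search tools."""
    encoded = quote(image_url, safe="")
    return {
        "Google": f"https://www.google.com/searchbyimage?image_url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
    }

if __name__ == "__main__":
    suspect = "https://example.com/viral-met-gala-photo.jpg"  # made-up address
    for name, link in reverse_image_search_links(suspect).items():
        print(f"{name}: {link}")
```

Opening either link shows where else the image (or the original it was made from) has appeared, which is exactly the “find another source” step described above.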

Intimate deepfakes

While deepfake disinformation has gotten the most media attention, researchers believe that intimate deepfakes – which show someone partly or fully naked, or engaging in a sexual act – are actually much more common. There are two types of widely available tools that make these: “nudify” apps which change a photo to make the subject look naked, and “face swap” apps that put a person’s face on an existing nude photo or video. More and more, though, AI tools can use multiple photos of a person to create more believable images or even videos.

While there have been situations where dozens of girls and young women in a single community were victimized – in the last year, there have been such cases both in Winnipeg and in London, Ontario – the research suggests that overall, relatively few young people have been victims of intimate deepfakes. While we do need to talk to young people about this issue, we also need to avoid making it seem more common – and, to teens’ eyes, more normal and acceptable – than it really is.

So what should we tell youth about intimate deepfakes?

While it’s good to encourage them to use privacy settings to limit who can see what they post online, that’s not an effective way of preventing deepfakes because they are often made by peers who likely will still have access. And of course, it’s not reasonable to ask young people not to post any photos or videos.

Instead, we should tell them the same things we do about sexting. Whether it’s a deepfake or a real photo, your children need to know that they can come to you for help if they find out an intimate image of them is being shared. Kaitlyn Mendes, a professor at Western University who specializes in gender inequality, says that "We want to get rid of the shame, because I can tell you [after] talking to lots of teenagers, they rarely go to adults when things go wrong because they're scared of being told 'You're an idiot. This is your fault. Your life is over.'"

When they do come to us, we can help document what happened and send removal requests to any places where the photo or video appeared. (MediaSmarts’ tipsheet Help! Someone Shared a Photo of Me Without My Consent has more on how to do this, and the Canadian Centre for Child Protection’s Need Help Now tool can also offer assistance.)

You can also work with your kids to investigate legal options for taking the deepfake down. At the time of writing, there are laws against making intimate deepfakes of anyone in BC, New Brunswick, Prince Edward Island and Saskatchewan. Making and sharing intimate images of people under 18, even if they are made with AI, is also a criminal offence.

We need to talk to our kids about the ethics of making deepfakes, too.

TELUS Wise and MediaSmarts’ #HowWouldUFeel campaign provides tools for countering the different excuses that kids often use to justify sharing sexts, and people use the exact same excuses to pretend that it’s okay to make and share intimate deepfakes. For instance, a 2020 study found that people who make intimate deepfakes justify it by minimizing the harm done, treating it not as a moral issue but as a technical challenge to make deepfakes that are realistic and convincing. It also found that making and sharing deepfakes had become normalized within that community.

Our kids need to understand that intimate deepfakes – as well as other deepfakes that show people doing things that are embarrassing or could get them in trouble – aren’t “victimless” but do harm to the people in them. For instance, a survey of 16,000 people in 10 countries found broad agreement that making and sharing intimate deepfakes without a person’s consent was harmful, while a study of teens aged 13-16 found that “sharing nude images or videos is always harmful to young people involved.”

People who have been victims of intimate deepfakes have also shared the impact it had on them:

  • “People say, ‘It's not real, get over it,’ but it's still embarrassing.”
  • “Everywhere we would go, everyone you’d walk by, they’d be talking about it.”
  • “Even if it was debunked as a fake image, the subject would still get bullied and made fun of.”
  • “I feel like it's a picture of an assault.”

Another attitude that we need to counter is that deepfakes don’t do harm because they’re not real, but just a fantasy. There are two reasons why this isn’t true. The first is that a deepfake doesn’t stay in your head: other people see it. The other is that what it shows is private, intimate and likely to be embarrassing or humiliating. If you made a deepfake of someone winning an Olympic medal or baking a cake, it would be misleading but probably not embarrassing. A deepfake where someone is naked or engaged in a sexual act is completely different.

We need to be careful, though, not to let deepfakes make us fall for another excuse: blaming the victim. If we treat the subjects of sexual deepfakes as “perfect victims,” because they never took or sent an intimate image themselves, it suggests that others who did take or send an intimate image are to blame if someone else shares it. When you’re talking to youth about deepfakes, be clear that it is always wrong to share an intimate image or video of someone without their clear consent – whether it was made with an AI or a camera.

To help your teens learn more about artificial intelligence, build critical thinking skills, identify common myths and dive deeper into AI ethics, check out the TELUS Wise responsible AI workshop.

