Key Takeaways
- A “deepfake” is an AI-generated image, video, or audio file that replaces one person’s likeness and/or voice with another person’s.
- According to surveys, between 40 and 50 percent of students are aware of deepfakes being circulated at school.
- For the victims (mostly girls), the emotional and psychological impact can be severe and long-lasting.
When Francesca Mani was in 10th grade, she had to alert school administrators that sexually explicit photos bearing her face were circulating around school—but the person in the pictures wasn’t her. Boys in her class had used artificial intelligence software to fabricate nude photos of her and other female classmates.
These photos are known as “deepfakes”—digitally altered images that can be difficult to distinguish from authentic ones. Mani’s case has been the most high-profile, but deepfake images have been making their way into middle and high schools across the country.
A deepfake, as defined in the NEA Policy Statement on the Use of Artificial Intelligence in Education, is an “AI-generated image, video, or audio file that convincingly replaces one person’s likeness and/or voice with another person’s.”
These deepfakes, along with revenge porn, doxxing, and swatting, have a severe impact on students’ everyday lives, particularly their mental health. Girls are much more likely to be bullied at school than boys, and much of that bullying happens online or in the form of texts.
Laura Tierney is the founder and CEO of The Social Institute, which educates people on responsible social media use. She says the number of AI deepfakes has been growing, and more students—mostly girls—are being exposed to them.
A 2025 survey conducted by Incogni and the National Organization for Women showed that one in four American women have experienced online abuse, with 9 percent of that abuse being sexual. Two percent of women who have experienced online abuse have been affected by deepfakes.
And in a 2024 Education Week survey, educators reported that half of their students had been misled by deepfakes.
“What's important to recognize is that one photo posted online is all that's needed to create a deepfake so this is an issue that can impact nearly every student,” Tierney says.
The Impact of Deepfakes
“When [deep fakes] happen, schools don't have a policy in place, they're taken by surprise,” explains Riana Pfefferkorn, a policy fellow at Stanford, who researches the law and policy implications of AI. “Educators and administrators may not even know about the existence of these sorts of ‘nudify’ apps.”
Nudify apps are websites or phone apps that use AI to alter photos, usually to make nude images. All it requires is one photo—from social media or taken in person—to create this image. Many of these websites have an age requirement, but little or no verification process.
Some students may make these deepfakes as a joke or prank, or create them out of curiosity, although Pfefferkorn has seen some rare instances of a more organized approach.
“[One student] was making images of lots of girls in school and had folders on his device for like each kid with different images that he was making,” she says.
Sometimes, she says, a student doesn't have the “long-term reasoning or mental development yet to reason through the likely consequences of what they do.”
But Pfefferkorn says the impact on the victim is always grave: psychological damage, reputational harm, harm to dignity, and a violation of privacy.
“There are students who are so severely psychologically affected that they have to change schools,” she says. “They may miss a lot of school or have to leave school abruptly in the middle of the day. They may be crying, depressed, unable to just keep up with activities of daily living.”
Pfefferkorn warns that the harm doesn't necessarily end with graduation. If the photos remain on the web, they can follow the victim, potentially surfacing when a prospective school or employer runs a background check.
Regulating AI
Students can become aware of these programs through advertisements in games and on social media.
“From what we understand from talking to platforms, it's like a total whack a mole game to try and find and take down all these sorts of ad campaigns,” Pfefferkorn says.
Even though several states have introduced bills to crack down on AI-generated child pornography, school leaders are not equipped with the knowledge of what to do when it occurs, including whether to report it, because it doesn't necessarily fall under the existing definition of child abuse.
“Schools and administrators and educators may not be sure if they're mandatory reporters under the laws that mandate the reporting of child abuse,” Pfefferkorn explains.
A 2024 survey by Education Week revealed that 71 percent of teachers had not received professional development related to AI.
In 2024, NEA produced a task force report that studied current and future AI use in classrooms and developed recommendations and guidelines for teachers using AI. One principle the task force identified was the ethical development and use of AI, which includes ongoing learning opportunities that help educators identify ethical AI dilemmas and handle them effectively when they arise.
‘Accountability and Inclusion Combined’
Pfefferkorn says implementing restorative justice practices, which are community-building principles designed to eliminate racial discrimination and keep students in school rather than in prison, may be an effective way for schools to enforce accountability. In these cases, that could mean taking the affected student’s feelings into account, listening to what they want, or facilitating a conversation between the perpetrator and the victim.
“There may be restorative justice principles that may be more grounded in what the student wants to have happen rather than everything just being put into the hands of the police,” she says.
One resource schools could implement is an anonymous tip line, so students feel safe reporting an incident involving themselves or a friend. Pfefferkorn also says it's important to center the victim and to work on preventing these incidents in the first place.
“I think this needs to be built into conversations in the classroom about consent and trust and people's privacy and dignity and agency,” she explains.
And when a student is a victim of a deepfake, Laura Tierney believes it's important to ensure they are not isolated.
“As educators, continuing to foster a school that promotes inclusion after a deepfake incident—and making sure that you have accountability and inclusion combined—is so important,” she says.
Students are often afraid to tell parents or caregivers about incidents on social media, Tierney says, including those about deepfakes. For those affected by deepfakes, Tierney recommends what she calls the S.H.I.E.L.D. approach.
“Stop, take a moment, pause and avoid reacting impulsively and maybe engaging with the person who immediately sent that,” she says. “The second is to huddle and reach out to a trusted adult.”
Tierney then recommends informing the platform where the image was found and trying to get it taken down, and collecting evidence, such as screenshots. Next, limit the poster by blocking them. The final step is to let people know the photo was fake.
“When ... a deep fake is shared, it's so easy for us to feel isolated and that there's nothing we could do, when there's actually so many positive moves a student can be making to make sure that they take care of themselves and their mental well-being,” she says.
Tierney advocates for addressing deepfakes in health classes to get students thinking about what they could do if they found a deepfake of themselves or a friend.
“Education, I think, beats reacting any day, all day, and so that could be education about how to spot deep fakes and what to do, education about when AI is fair game versus not fair game.”