
‘It’s Personality Theft’: How Creators Are Fighting Back Against AI Deepfakes


By Ella Chakarian

March 24, 2026

Podcaster Yanina Oyarzo has found AI deepfakes of herself, which can hurt her reputation and even her bottom line. You and Me Media

Yanina Oyarzo spends most of her days behind a mic in a Los Angeles studio, where she records episodes for her podcast about self-confidence, dating, and everything in between. The content creator has built an audience of 90,000 on Instagram, where she posts beauty and lifestyle content. So when she recently came across a video of herself promoting a national personal injury law firm based in Arizona, she was stumped. 

“Have you or someone you loved had serious problems after getting a chemo or PowerPort implanted?” an AI-generated avatar resembling Oyarzo asked in the clip. The backdrop of the video had the same neutral colorway as her L.A. recording studio. The voice was not hers, but the face looked like an overly filtered version of herself.

Oyarzo says she never agreed to create an ad for the law firm. (The firm did not respond to Rolling Stone’s requests for comment.) She knew right away that the clip was AI-generated, because it wasn’t the first such video she had encountered. In November, Oyarzo’s video editor alerted her to a similar ad featuring an AI-generated avatar that subtly resembled her, but with dark red hair and eyebrows instead of her brown hair. Oyarzo brushed it off, wondering if she had simply convinced herself that it looked like her.

The following day, friends and followers notified her about more videos. The resemblance in these clips was undeniable. The avatar had the same facial features, hair color, and a pronounced beauty mark above her lip — in the same spot as Oyarzo’s — that seemed to disappear and reappear throughout the clip. The reality that her likeness had been basically pirated settled in. “How do I not know that there’s more of this around?” Oyarzo asks.

According to MIT’s AI Incident Database, a crowdsourced database of media reports about AI risks and harm, reports of malicious actors using AI to produce disinformation and scam victims have jumped from 20 records in 2022 to 211 records in 2025. Kit Walsh, the Electronic Frontier Foundation’s director of AI and Access-to-Knowledge Legal Projects, says that the economics of AI deepfake production have shapeshifted in recent years. “It’s very different, because it’s much easier to do, much more accessible, and much less expensive,” Walsh says.


Dan Neely, CEO of the Chicago-based AI licensing and protection company Vermillio, says that about two years ago, AI deepfakes shifted from targeting A-list celebrities to the creator space. People realized they could profit from smaller fan bases by piggybacking off of viral trends using free and easy-to-navigate AI tools, he says. “In 2019, we were scanning 18,000 generative AI pieces of content created,” says Neely. “That number now exceeds two trillion.” 

Deepfakes and synthetic characters have become heavily integrated into social media platforms and “feel more human than ever before,” Neely says. For creators, the risks range from declining discoverability to losing out on monetization. Not only do creators have to deal with deepfake fraud, says content creator Emy Brookins, but they now have to compete with entirely AI-generated influencers that have already distorted algorithms, leaving her with a sense of “impending doom.”

‘An Identity Problem’

Brookins, a fashion and lifestyle creator, encountered the first deepfake of herself on Instagram two years ago, when a company altered her speech and mouth movements in a video to promote a Bible study app. “It looked so fake, though,” Brookins says. “I could tell this is AI, you know?”

Recently, Brookins’ followers made her aware that Guardio, an Israel-headquartered cybersecurity company, was using a video pulled from her TikTok page to promote their app. This one was far more realistic. “It’s personality theft,” Brookins says. “It’s like somebody is attempting to take ownership of something that only belongs to me, which is my voice, which is my face, which is the things that I do.”


According to Guardio’s website, the company offers a browser-based security extension that blocks phishing, scams, and malicious downloads to help protect its 1.5 million users against identity theft. A Guardio spokesperson told Rolling Stone they were initially unaware of this case, but that they have since removed the ad depicting Brookins’ likeness. They added that the video was produced and supplied by London-headquartered MakeUGC, an AI-powered external marketing agency that offers “AI UGC to replace creators,” according to its website.

In an email to Rolling Stone, MakeUGC CEO George Stock wrote that a former employee created contracts and misrepresented licensing agreements “for a number of real creators without their knowledge or consent.” The contractor is no longer working with MakeUGC, he added.

“A total of 33 affected avatars have been removed from our platform, and all users who created content using these avatars have been directly notified,” Stock wrote. “We deeply regret the harm caused to any creator affected by this individual’s fraudulent actions, and we remain committed to being a platform built on consent, transparency, and trust.” He added that active creators on the platform have since been re-signed under renewed contracts, with “human verification measures” now in place.

What content creators are experiencing, says Hany Farid, cofounder and chief science officer at cybersecurity company GetReal Security, is less a deepfake problem than “an identity problem.” With generative AI innovation and widespread adoption by everyday users, anyone with a brief clip or a single image online can have their likeness stolen, regardless of follower count. For smaller creators and management teams, ongoing monitoring and takedown efforts can turn into full-time jobs. “Anybody can create an avatar of you, and then anybody can monetize that,” Farid says. “Are you going to be the Internet’s police and keep looking for your face and your likeness?”

‘The Deepfake Takedown Economy’

While there is no federal right of publicity in the U.S., there are state-level laws that would protect creators from having their likeness used in unauthorized ads. “Using someone’s likeness to advertise your product without their permission is a violation of the right of publicity and has been for over 100 years,” says Walsh, EFF’s director of AI. Because of such legal precedents, these are pretty straightforward cases. 

But tracking down the videos is the tricky part. Both Brookins and Oyarzo were made aware of the deepfakes by their friends and followers. “I’m trying to figure out, ‘How can I find all the ones that are out there already?’” Oyarzo asks.

With the increasing prevalence of AI-generated content, creators are not entirely left to fend for themselves. On Tuesday, the Creators Guild of America launched a tool called Mosaic, a credentialing platform built to address the rise in deepfake fraud targeting content creators. Verified creators are granted a unique ID number and a landing page, which serves as a public digital repository showcasing their work, such as authenticated collaborations with brands. CGA founder Daniel Abas says the nonprofit will partner with the deepfake detection company Loti AI to handle takedowns of unauthorized content targeting creators.

Vermillio, the Chicago-based AI licensing company, offers technology that scans the internet for content containing unauthorized intellectual property belonging to its 1,300 clients — content creators, musicians, film and TV personalities, and other industry professionals — with an automated process to take it down from platforms. The company says it issues thousands of takedowns each day for a single customer, with a 95 percent success rate.

Three years ago, Rachel Vrabec, CEO of digital privacy and security platform Kanary, began hearing from content creators dealing with impersonation accounts that would steal away their ad revenue. “This evolution [of] legitimate businesses abusing the name, image, likeness or the IP of a creator through a third party, like an AI platform… that’s definitely newer,” Vrabec says. “Even today, a lot of people still don’t think it’s going to happen to them,” she adds. 

Kanary helps a range of creators — from Twitch streamers to YouTubers — combat online threats and security issues. Vrabec says that even for creators with millions of followers, some management teams are “scoped narrowly to help them with brand deals,” and not situations like deepfake fraud. 


Originally reported by Rolling Stone