Earlier this year, 20-year-old Ukrainian influencer Olga Loiek came face to face with an army of artificial intelligence-generated clones of herself online — and they were all speaking Chinese.
“I was really creeped out. I saw my face saying something in a different language. Then when I went and translated it, this was all stuff I would never say in my life,” she told the Financial Times, adding that the deepfake character spoke about “strengthening the relationship between Russia and China”.
Loiek, who was alerted to the videos by some of her 20,000 followers on YouTube and Instagram, said she was unaware of the motive behind the cloning but suspected her likeness could have been used as part of a disinformation campaign. The videos spread across the Chinese social media sites Xiaohongshu, which translates as “little red book”, and Bilibili, with one fake account attracting 300,000 followers.
“It sounded like propaganda, trying to make people believe that Russia is a great country, [when] the war in Ukraine is the worst thing that has happened in my life,” she added.
Loiek’s case is an example of how generative AI technology is being abused by nefarious actors, who steal the visual identity of unsuspecting victims to create deepfakes. These synthetic doppelgängers are being used for disinformation campaigns but also to promote products on social media or in non-consensual deepfake pornography.
Rapid advances in generative AI have expanded the market for free and easy-to-use tools to create synthetic videos and imagery. Deepfakes of celebrities including Taylor Swift, Selena Gomez and Elon Musk have been created to promote everything from kitchenware to crypto scams.
But it is not just megastars: lesser-known content creators, and women in particular, have also become targets for this new form of online abuse.
Influencers who regularly post videos and images of their faces on the internet are particularly vulnerable: the more content of an individual there is to train the AI on, the more realistic the copycat will be.
In the absence of easy legal recourse for victims, start-ups have emerged attempting to use AI to tackle the issue.
One such company, Ceartas, was set up after its founder, Dan Purcell, struggled to find ways to remove intimate images of himself that had leaked online. Initially, the business focused on helping adult creators on OnlyFans ensure their content was not copied to other platforms.
“About a year and a half ago, we started seeing deepfakes,” Purcell said.
He said Ceartas — which means justice in Gaelic — can identify a deepfake “in milliseconds”. It then takes steps to remove it from Google’s search engine and any other hosting platforms by assessing what terms of service or laws, such as copyright, it might infringe.
Another company, Alecto, is developing a similar product that lets users upload scans of their faces; the platform then uses AI to search for intimate images of them, including deepfake versions.
Tracking down these videos can be difficult. Influencers such as Loiek and Michel Janse, another target of deepfakes, have relied on their followers to flag content for them. Janse’s likeness was used in an ad on YouTube to promote erectile dysfunction supplements.
“I honestly don’t know how we can be more discerning as a society to tell what is real or what is fake,” she said in a TikTok video alerting her followers to the deepfakes. Her management reported the advert to YouTube, which removed it.
YouTube said protecting users is a “top priority” and it is investing heavily in its “ability to detect and remove deepfake scam ads and the bad actors behind them, as we did in this case”.
AI-generated content is allowed on TikTok, Instagram, X and YouTube, but the platforms largely ban deliberately deceptive manipulated media, as well as copyright breaches, scams and non-consensual nudity, whether or not AI is involved.
Most AI generator apps also have guidelines around the type of content they can be used to create. However, it is unclear how the start-ups behind them can enforce these rules.
Tools from Los Angeles-based HeyGen were used to create one of Loiek’s deepfakes and its technology is also used in other products through a software plug-in, making it difficult to police every use. HeyGen did not respond to a request for comment.
However, the ambition of AI avatar companies such as HeyGen and rivals including Synthesia and Soul Machines is to create cost-effective AI tools for legitimate commercial use, including making advertising content for social media.
Last month, New York-based entrepreneur Becky Litvin experimented with AI to create a deepfake influencer post promoting her wet wipe brand, Get Dirty. She posted the video on X, labelled as AI, to gauge the response.
It’s terrible but I still think it’s wild that this can all be done with AI. Imagine in 6 months…
— Beck 🎈 (@beckylitv) March 25, 2024
Litvin used the website Arcads, which relies on HeyGen’s technology, to create a naturalistic video of a dark-haired woman with a nose ring evangelising her “hygiene hack” for getting rid of body odour. To make it look authentic, Litvin scripted the video with TikTok-style language such as “stank”, and the AI clone delivered the lines to camera from a car seat.
She paid just $100 to access the tool, choosing from various “AI models” to present the content.
Ariel Marie was the real woman depicted in the deepfake. She had consented to Arcads using her content to train its AI through Fiverr, a website that pays content creators to make bespoke videos.
Marie said it was “a really odd feeling” seeing her likeness in the video and realising it was not her own voice, but that she was “grateful” for the opportunities the viral moment has now given her.
Arcads said it engages actors and creators on clear terms, with consent and a formal contract covering intellectual property rights, and offers a one-off payment as compensation. It also uses moderation technology to filter potentially harmful content requests and restricts the accounts of users who break its rules.
Litvin said her intention was not to “trick” anyone but to raise awareness of what AI can do. She believes that within a year, many brands will be using AI to generate cheap ads for social media. But she warned that if they do, “it is lying [to the consumer] because that person that looks like a person did not actually try the product”.
“Consumers and audiences can usually pick up on what’s real and what’s not,” Litvin said. “This one was a little bit too real.”