Russia is putting generative artificial intelligence to work in online deception campaigns, but its efforts have been unsuccessful, according to a Meta security report released on Thursday.
The parent company of Facebook and Instagram found that, so far, AI-powered tactics “provide only incremental productivity and content-generation gains” for bad actors, and that Meta has been able to disrupt deceptive influence operations.
Meta’s efforts to combat “coordinated inauthentic behavior” on its platforms come as fears mount that generative AI will be used to trick or confuse people in elections in the United States and other countries.
Russia remains the top source of “coordinated inauthentic behavior” using bogus Facebook and Instagram accounts, David Agranovich, Meta’s security policy director, told reporters.
Since Russia’s invasion of Ukraine in 2022, those efforts have been concentrated on undermining Ukraine and its allies, according to the report.
As the US election approaches, Meta expects Russia-backed online deception campaigns to attack political candidates who support Ukraine.
Facebook has been accused for years of being used as a powerful platform for election disinformation. Russian operatives used Facebook and other US-based social media to stir political tensions in multiple US elections, including the 2016 election won by Donald Trump.
Experts fear an unprecedented deluge of disinformation from bad actors on social networks because of the ease of using generative AI tools such as ChatGPT or the DALL-E image generator to produce on-demand content in seconds.
According to the report, AI has been used to create images and videos, to translate and generate text, and to craft fake news stories and summaries.
When Meta scouts for deception, it looks at how accounts act rather than the content they post.
Influence campaigns tend to span an array of online platforms, and Meta has noticed posts on X, formerly Twitter, used to make fabricated content seem more credible. Meta shares its findings with X and other internet firms and says a coordinated defense is needed to thwart misinformation.
“As far as Twitter (X) is concerned, they are still going through a transition,” Agranovich said when asked whether Meta sees X acting on deception tips. “A lot of the people we’ve dealt with in the past there have moved on.”
X has gutted trust and safety teams and scaled back content moderation efforts once used to tame misinformation, making it what researchers call a breeding ground for disinformation.