Disinformation campaigns pose risks to enterprise businesses


Disinformation campaigns are increasingly difficult to spot and represent a growing threat to government officials, elections, consumers and enterprise businesses. It’s unclear how to stop them.

Experts see increased risks as Russia and others use clever domain names and AI-generated content coupled with fake social media accounts to target not just elections but also CEOs, companies and their brands. By the time such content is removed, the damage is done, and the question of responsibility remains: Should the burden fall on domain name sellers and social media platforms to spot and remove harmful content, or is it the responsibility of the organizations being targeted to take action?

Disinformation thrives because bad actors can make false information appear credible, aided by free speech protections and an abundance of cheap domain names. If someone wanted to impersonate the White House, for example, GoDaddy currently charges $699.99 to register the domain name “whitehouse.press” for a year, or $0.01 for “whitehousegov.info.” The Internet Corporation for Assigned Names and Numbers (ICANN) regulates domain registrars to a certain degree but does not regulate website content.

Due to the nearly identical domain names, website users won’t always spot the difference between a legitimate website and a fake one, said Darrell West, a senior fellow at the Brookings Institution.

“The domain name system contributes to the disinformation problem,” West said. “It’s easy to use the current naming system to have a site that sounds similar but is completely fake.”

In September, the U.S. Department of Justice took down 32 internet domains used in a Russian-backed disinformation campaign. The cybersquatted sites, which resembled sites for Fox News and The Washington Post, used AI-generated content and social media to reduce support for Ukraine and influence the 2024 U.S. presidential election. Cybersquatting occurs when someone registers a domain name that mimics a website or name owned by another entity. Russian actors registered “washingtonpost.pm,” a fake version of The Washington Post’s site, with Sarek, a Finland-based registrar. The “.pm” domain is the country code for Saint Pierre and Miquelon, a French territory.
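Lookalike names of this kind can often be caught programmatically. Below is a minimal Python sketch, using a hypothetical watchlist and threshold rather than any registrar’s actual screening system, that flags a candidate domain whose second-level label closely matches a protected brand name even when the top-level domain differs:

```python
# Minimal lookalike-domain screen (illustrative only). The watchlist,
# threshold and function names are hypothetical assumptions.
from difflib import SequenceMatcher

PROTECTED = ["washingtonpost.com", "foxnews.com"]  # hypothetical watchlist

def second_level_label(domain: str) -> str:
    # "washingtonpost.pm" -> "washingtonpost"
    return domain.lower().rstrip(".").split(".")[-2]

def flag_lookalikes(candidate: str, threshold: float = 0.85) -> list[str]:
    # Compare the candidate's name against each protected brand,
    # ignoring the TLD, and return close matches.
    label = second_level_label(candidate)
    return [
        brand for brand in PROTECTED
        if SequenceMatcher(None, label, second_level_label(brand)).ratio() >= threshold
    ]

print(flag_lookalikes("washingtonpost.pm"))  # ['washingtonpost.com']
```

A screen like this catches exact name reuse under a different top-level domain, as in the DOJ takedown; typo variants and added words require broader mutation rules.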

The government’s takedown of 32 domains this year likely won’t solve the issue, which West described as a “Whac-a-Mole” approach. The ease with which foreign entities can create new websites renders the takedown somewhat ineffective, he said. Indeed, registrars and registries around the world give purchasers access to a vast range of domain name options.

“It’s something that’s going to require continued vigilance on the part of federal officials,” West said.

Domain Name System opens doors to bad actors

Esther Dyson, former chairman of the ICANN board, warned Congress as early as 2011 about the threat posed by expanding the creation of top-level domains.

ICANN was founded in the late 1990s not only to develop policies for the Domain Name System (DNS), the piece of internet infrastructure that translates user-friendly domain names into the numerical IP addresses computers use to find each other, but also to expand the number of top-level domains beyond popular, entrenched ones like .com. However, Dyson later testified to Congress that expanding the pool of available domain names instead led to more “sleazy marketing practices” and “spammy domains.”
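For readers unfamiliar with the plumbing, the lookup DNS performs can be shown in a few lines of Python using only the standard library; the domain queried here is just an example, and the address returned will vary:

```python
# A minimal illustration of a DNS lookup: a user-friendly name is
# resolved to the numerical IP address that computers actually use.
import socket

name = "www.icann.org"  # example domain; any resolvable name works
address = socket.gethostbyname(name)
print(f"{name} resolves to {address}")
```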

“Unfortunately, the ease and lack of accountability with which someone can buy a domain name has led to a profusion of spam, phishing and other nefarious sites,” Dyson said in her testimony.

ICANN accredits registrars to register and sell domain names. It also contracts with internet registries, the organizations that manage top-level domains such as .com, .net and .org and work with registrars to sell domain names. Verisign Inc. serves as the registry for .com and .net. Those contracts with registrars and registries serve as ICANN’s enforcement mechanism and “provide a consistent and stable environment for the domain name system, and hence the Internet,” according to ICANN. The organization has limited authority over country-code domains, such as .uk or .us.

Today, to counter what ICANN calls “DNS abuse,” the organization focuses on four categories of harmful activity: botnets, malware, pharming and phishing. It also targets spam when spam is used to facilitate any of those four activities. ICANN explicitly states it does not regulate content, meaning there is little the institution can do about websites spreading disinformation.

Still, there is nothing inherently illegal about creating a website like “washingtonpost.pm,” one of the Russian-backed domains taken down by the DOJ; it isn’t the widely recognized “washingtonpost.com,” but it looks deceptively similar. To take the malicious domains down, the DOJ relied on U.S. money laundering and criminal trademark laws.

John Crain, ICANN’s chief technology officer, said the organization’s policy development community plans to discuss the DNS abuse categories and ways to improve the science of detecting problem domains faster. However, the organization still has no authority under its operating rules to regulate content.

“There’s a limit to what you can do,” he said.

Even if The Washington Post tried to get ahead of bad actors registering similar domain names, defensively registering every potential combination can cost thousands of dollars annually, and it quickly becomes nearly impossible to think of every possible name variation.
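Rough arithmetic shows why. The Python sketch below, whose mutation rules and TLD sample are illustrative assumptions rather than an exhaustive list, generates common typo variants of a single brand name; multiplied across even a handful of the more than 1,000 top-level domains now in existence, the defensive-registration bill quickly runs to thousands of names:

```python
# Back-of-the-envelope typosquatting count (illustrative only).
import string

def typo_variants(name: str) -> set[str]:
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])          # drop one character
        for c in string.ascii_lowercase:
            variants.add(name[:i] + c + name[i + 1:])  # substitute one character
    for i in range(len(name) - 1):
        # swap adjacent characters
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # the real name is not a typo
    return variants

TLDS = [".com", ".net", ".org", ".info", ".press", ".pm"]  # tiny sample
count = len(typo_variants("washingtonpost")) * len(TLDS)
print(count, "candidate domains from one brand name and six TLDs")
```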

While more domain names might cause confusion, Crain argued the expansion also increases competition, which was one of ICANN’s original goals.

“Every tool that society builds comes with its positives and its negatives,” Crain said. “There’s not been a technology that has been embraced by society that has not come with its negative side.”

Kat Duffy, senior fellow in digital and cyberspace policy at the Council on Foreign Relations, said she expects increased capacity in both the U.S. and other countries to identify domain names used in disinformation campaigns, report them and take additional steps.

She said existing reporting mechanisms, such as those ICANN requires in its policies with registrars and registries, could be improved to better identify platforms receiving significant traffic or reports of fraudulent activity.

Duffy added that she believes AI will help identify disinformation sites, but its impact will be limited.

“The minute we find one threat and shut it down, people are very creative about spinning up another one,” she said.

How social media platforms contribute

Social media platforms contribute to the spread of disinformation yet resist taking responsibility for moderating content. Since 2020, Meta’s Facebook and Instagram, YouTube and X have severely cut their content moderation teams, West said.

“Social media platforms need to do a lot more than what they currently are doing,” West said.

Meta, Microsoft and Google executives testified before Congress in September, following the DOJ’s takedown of Russian-backed domain names. The executives discussed efforts to combat foreign election interference and the spread of misinformation and disinformation.

Sen. Mark Warner (D-Va.) pointed out that social media users widely shared the false news generated by the Russian-backed domains under the banners of Fox News and The Washington Post. Not only did the domain names look similar, but the fake articles also carried real authors’ bylines.

“I’m not sure any American, even a technology-savvy American, is going to figure out that these are fake,” Warner said during the hearing.

One of the biggest challenges facing social media platforms is taking down harmful content without running afoul of First Amendment protections. Sen. Marco Rubio (R-Fla.) raised concerns about content moderation and free speech, and he isn’t the only one. California Gov. Gavin Newsom signed a bill in September to combat deepfake election content, and the law is already facing legal challenges on First Amendment grounds.

Disinformation campaigns threaten enterprise businesses

Disinformation campaigns affect enterprise businesses largely through falsified identities, said Gartner analyst Akif Khan. Deepfakes, meaning AI-generated voices, images and videos, fall into this category. Generative artificial intelligence has made it much easier for attackers to mount convincing attacks, he said.

“The credibility of the disinformation has reached, frankly, incredible levels,” Khan said.

Disinformation can also harm an organization’s brand: an attacker might impersonate a company CEO online to damage the company or its products, potentially affecting the share price, or build phishing websites that mimic the company’s site to steal client information.

Khan said the CISO is often responsible for reporting malicious domain names. Even a few years ago, CISOs struggled to understand why domain names were their problem because “they were fixated on protecting their infrastructure rather than trying to take down a website that might be hosted in another part of the world,” he said.

Thinking has evolved significantly, he added, and the focus now is on preventing disinformation by stopping attackers from obtaining credentials through phishing websites in the first place.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.


