
Foreign misinformation campaigns can be tracked in real time, research shows

Experiments show that if programmers select for the right features from online posts, an algorithm can successfully distinguish between authentic content and content produced by trolls. Photo by Pixelkult/Pixabay

July 22 (UPI) -- How can governments, online platforms and internet users curb the influence of foreign misinformation campaigns?

Research published Wednesday in the journal Science Advances suggests it is possible to identify bad actors, or trolls, in real time using machine learning algorithms.


According to researcher Jacob Shapiro, a professor of politics and international affairs at Princeton University, misinformation campaigns can reveal themselves in two main ways.

"To have influence, coordinated operations need to say something new, or they need to say a lot of something that users are already saying," Shapiro told UPI in an email. "You can find the first because it's unusual content by definition."

Finding the second is harder, but Shapiro and his colleagues thought they could design and train a machine learning algorithm to catch trolls.

"When influence campaigns try to shift a conversation with large amounts of content, they rely on relatively low-skilled workers producing a lot of posts," Shapiro said. "Workers are not natives of the influence targets and need to be trained on what 'normal' looks like. Moreover, their managers need standards to assess performance."

These two realities yield patterns that can be identified by algorithms.
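In practice, such patterns show up as measurable features of individual posts: timing, length, link-sharing habits and so on. The sketch below illustrates the general idea with a handful of hypothetical features; these particular features are illustrative guesses, not the ones used in the study.

```python
# A minimal sketch of turning a post into a numeric feature vector.
# The features here are hypothetical examples, not those from the paper.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    hour_posted: int   # 0-23, in the target audience's local time
    urls_shared: int
    hashtags: int

def extract_features(post: Post) -> list[float]:
    """Convert one post into a fixed-length numeric feature vector."""
    words = post.text.split()
    return [
        float(len(words)),                                 # post length in words
        sum(len(w) for w in words) / max(len(words), 1),   # mean word length
        float(post.hour_posted),                           # paid workers tend to post on a work schedule
        float(post.urls_shared),                           # link-pushing behavior
        float(post.hashtags),                              # hashtag stuffing
    ]

example = Post("Breaking: you won't believe this story", hour_posted=9, urls_shared=2, hashtags=3)
print(extract_features(example))
```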


Researchers used past misinformation campaigns from China, Russia and Venezuela to train their troll-finding algorithm. Once built, Shapiro and his colleagues put the algorithm to the test by presenting it with new content produced by both trolls and normal users.

The experiments showed that if programmers select for the right features from online posts, the algorithm can successfully distinguish between authentic content and content produced by trolls.

"What we found is that, historically, an out-of-the-box random forest algorithm with our features does pretty well at picking out Chinese, Russian and Venezuelan trolls across several different prediction tasks," Shapiro said.

For example, the algorithm can scan last month's content on a given platform and use what it learns to identify this month's trolls.
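In code, that month-over-month setup looks roughly like the sketch below: fit an out-of-the-box random forest, the classifier the researchers name, on one month of labeled posts, then score the next month's posts without retraining. The feature matrices here are synthetic placeholders standing in for real per-post features.

```python
# A hedged sketch of the train-on-last-month, test-on-this-month task
# described in the article, using scikit-learn's stock random forest.
# The data below is random noise that exists only to make the example run.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1,000 posts per month, 5 features each,
# labeled 1 for troll-produced content and 0 for ordinary users.
X_last_month = rng.normal(size=(1000, 5))
y_last_month = rng.integers(0, 2, size=1000)
X_this_month = rng.normal(size=(1000, 5))
y_this_month = rng.integers(0, 2, size=1000)

# Fit on last month's labeled content...
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_last_month, y_last_month)

# ...then score this month's posts with the trained model.
scores = clf.predict_proba(X_this_month)[:, 1]
print("AUC on the new month:", roc_auc_score(y_this_month, scores))
```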

According to the new study, there is no one variable that gives a troll away.

After all, social media platforms are highly dynamic environments in which users constantly change how they engage. As a result, Shapiro said, trolls have to adapt their content production, too.

Thanks to the machine learning capabilities of the new algorithm, this complexity didn't prevent researchers from sussing out trolls.

"What our research shows is that in any given period for any given campaign, a large share of the troll activity looked different from normal users in discernible ways," Shapiro said.


Researchers suggest the algorithm, once it is further refined and trained, could be adopted and deployed by both online platforms and governments.

As with any probabilistic model, it's likely the algorithm would make mistakes when distinguishing between genuine users and trolls.

"That's why one should never use this kind of tool to make attribution of specific accounts," Shapiro said.

Instead, Shapiro sees the technology being used to help governments and online platforms anticipate the topics, scope and effects of a foreign influence campaign. The technology could also help moderators queue up content and accounts for more careful scrutiny.
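A minimal sketch of that review-queue workflow, assuming a fitted classifier and per-account features like those above, would rank accounts by model score and surface only the top of the list for human moderators, never treating a score as an attribution on its own:

```python
# Sketch of a moderator review queue: rank accounts by model score and
# flag the most troll-like for human scrutiny. Model and data are
# synthetic stand-ins, as in the earlier examples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 5))
y_train = rng.integers(0, 2, size=500)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

# Score 100 unseen accounts and queue the 10 most troll-like for review.
account_ids = [f"account_{i}" for i in range(100)]
X_accounts = rng.normal(size=(100, 5))
scores = clf.predict_proba(X_accounts)[:, 1]   # model's estimate of P(troll-like)

for idx in np.argsort(scores)[::-1][:10]:
    print(f"queue for human review: {account_ids[idx]} (score={scores[idx]:.2f})")
```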
