Study: 'Bots' primary source of misinformation on COVID-19 on Facebook

Facebook groups infiltrated by automated accounts are more likely to spread misinformation about the pandemic, a new study has found. File Photo by Kon Karampelas/Unsplash

June 7 (UPI) -- Facebook groups infiltrated by automated, fake accounts are more than twice as likely to spread misinformation about the COVID-19 pandemic as groups less influenced by these "bots," a study published Monday by JAMA Internal Medicine found.

About one in five posts made to Facebook groups most impacted by fake, automated accounts claimed that face coverings "harmed the wearer" while linking to a study that showed the opposite to be the case, the data showed.

More than half of the posts to these groups shared conspiracy theories that called into question the accuracy or methods used in the trial, the researchers said.

Conversely, fewer than one in 10 posts made to Facebook groups least affected by automation claimed masks harmed the wearer and just over one in five shared conspiracy theories about the trial, according to the researchers.

Just under three-fourths of the posts to these groups shared the trial without any false claims, they said.

The analysis focused only on Facebook groups and did not address the potential role of bots sharing misinformation about the pandemic on other platforms, such as Twitter or TikTok, the researchers said.

"The ... pandemic has sparked what the World Health Organization has called an 'infodemic' of misinformation," study co-author John W. Ayers told UPI in an email.

However, "bots like those used by Russian agents during the 2016 American presidential election have been overlooked as a source of COVID-19 misinformation," said Ayers, an epidemiologist at San Diego State University.

WHO officials are particularly concerned about the spread of misinformation about the disease as efforts to vaccinate the public against the virus ramp up globally.

A study published in February by the journal Nature found that exposure to misinformation about the safety and effectiveness of the vaccines led to a 6% drop in those interested in getting inoculated in the United States and Britain.

To address this issue, the WHO as well as the Centers for Disease Control and Prevention have taken steps to correct misconceptions about the virus and the vaccines spread via social media posts.

"A key driver of vaccination is public trust -- [and] trust must be earned," WHO director-general Tedros Adhanom Ghebreyesus said last week during a meeting with British officials.

"To succeed in vaccinating the whole world, governments will have to deploy a range of strategies and tailor them to each country," he said.

Bots are computer programs, often controlled by a single user, designed to simulate human activity online, according to researchers at the University of Southern California.

They are often used to spread misleading or potentially harmful information online, or to scam real users, the USC researchers said.

For this study, Ayers and his colleagues assessed the role of automated bots in the spread of misinformation on Facebook by tracking posts on the "DANMASK-19" trial over a five-day period in November.

The trial was selected because it was the "fifth-most shared research article of all time as of March 2021," according to Altmetric, the researchers said.

The findings demonstrated that "masks are an important public health measure to control the pandemic," Ayers and his colleagues said.

Between Nov. 18 and 22, more than 700 posts that provided direct links to DANMASK-19 were shared in 563 public Facebook groups, the data showed.

Of these, 279 posts, or 39%, were in Facebook groups most affected by automation. A group was classified as such if it hosted identical links five or more times and at least half of those links were posted within 10 seconds of one another, the researchers said.
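That classification rule can be sketched in code. The function below is an illustrative reconstruction based only on the criterion the article describes, not the researchers' actual implementation; the function name, the input format (a list of URL/timestamp pairs per group), and the neighbor-based timing check are all assumptions.

```python
from collections import defaultdict

def likely_automated(posts, min_repeats=5, window_s=10, frac=0.5):
    """Flag a group as 'most affected by automation' (per the article's
    description): an identical link appears at least `min_repeats` times,
    and at least `frac` of those shares land within `window_s` seconds
    of another share of the same link.
    `posts` is a list of (url, unix_timestamp) tuples for one group."""
    by_url = defaultdict(list)
    for url, ts in posts:
        by_url[url].append(ts)
    for times in by_url.values():
        if len(times) < min_repeats:
            continue
        times.sort()
        # Count shares that occur within window_s of a neighboring share.
        close = sum(
            1 for i, t in enumerate(times)
            if (i > 0 and t - times[i - 1] <= window_s)
            or (i + 1 < len(times) and times[i + 1] - t <= window_s)
        )
        if close >= frac * len(times):
            return True
    return False
```

For example, five shares of the same link arriving two seconds apart would trip the rule, while the same five shares spread an hour apart would not.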

Sixty-two posts, or 9%, were made in Facebook groups that were least affected by automation, they said.

The percentage of posts linking to DANMASK-19 that claimed that masks harmed the wearer was 2.3 times higher in Facebook groups most affected by automation compared with those least affected, the data showed.

There were 2.5 times as many posts making "conspiratorial claims" about the trial in Facebook groups most affected by automation than in those least affected, the researchers said.

"We find that bots, or large numbers of automated accounts controlled by single users, on Facebook spread malicious COVID-19 misinformation at far greater rates than ordinary users," Ayers said.

"If we want to correct the 'infodemic,' eliminating bots on social media is the necessary first step [and] unlike controversial strategies to censor actual people, silencing automated propaganda is something everyone can and should support," he said.
