The four factors that fuel disinformation in Facebook ads

Before and after the 2016 U.S. presidential election, Russia’s Internet Research Agency purchased thousands of Facebook ads in an effort to stoke division among Americans. Their effect on the election is hard to quantify. But their reach is undeniable.

“Tens of millions of people were exposed to these ads. So we wanted to understand what made these disinformation ads engaging and what made people click and share them,” said Juliana Fernandes, a University of Florida advertising researcher. “With that knowledge, we can teach people to pinpoint this kind of disinformation and not fall prey to it.”

With these disinformation campaigns ongoing, that kind of education is vital, Fernandes says. Russia continued its efforts to mislead Americans around the COVID-19 pandemic and the 2020 presidential election. And those campaigns are simply the best known; many other misleading ad campaigns are likely flying under the radar all the time.

The most-clicked ads followed a clear recipe made up of four ingredients. They were short, used familiar and informal language, and were backed by big ad buys that kept them running long enough to reach more people. The fourth ingredient was a surprise: the most engaging ads were full of positive feelings, encouraging people to feel good about their own groups rather than bad about other people.

“It’s a little bit counterintuitive, because there’s a lot of research out there that people pay much more attention to negative information. But that was not the case with these ads,” Fernandes said.

These are the findings from research conducted by Fernandes and her UF colleagues analyzing thousands of deceptive Russian Facebook ads. Fernandes, an assistant professor of advertising in the College of Journalism and Communications, collaborated with researchers in the Herbert Wertheim College of Engineering and the College of Education to publish their results Feb. 21 in the Journal of Interactive Advertising.

Their dataset came courtesy of the U.S. House of Representatives Permanent Select Committee on Intelligence, which investigated the Internet Research Agency’s campaigns around the 2016 election. That trove offered a detailed look at the engagement these ads generated, data that is normally hidden from public view.

In all, the UF researchers analyzed more than 3,200 ads, a sample of the ads released by the House committee, which reviewed more than 80,000 pieces of IRA content. Using machine-learning techniques, the team determined the linguistic characteristics and emotional tone of each ad, then paired those measures with data on how much money was put behind the ad, its length and how many clicks it received.
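The study’s actual models aren’t reproduced here, but the general shape of that kind of analysis is easy to illustrate. The Python sketch below is a minimal, hypothetical version: it assumes a CSV of ad records with made-up column names (ad_text, spend_usd, days_live, clicks) and uses the off-the-shelf VADER sentiment scorer as a stand-in for the machine-learning techniques the researchers used.

```python
# Minimal sketch of an ad-level analysis like the one described above.
# Assumes a hypothetical CSV with columns: ad_text, spend_usd, days_live, clicks.
# VADER is a stand-in for the study's actual models.
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
ads = pd.read_csv("ira_ads.csv")  # hypothetical file name

# Linguistic feature: ad length in words
ads["word_count"] = ads["ad_text"].str.split().str.len()

# Emotional tone: VADER compound score in [-1, 1]; above 0 is net positive
ads["sentiment"] = ads["ad_text"].apply(
    lambda text: analyzer.polarity_scores(str(text))["compound"]
)

# Pair each feature with engagement and ask how it tracks with clicks
features = ["word_count", "sentiment", "spend_usd", "days_live"]
print(ads[features + ["clicks"]].corr(method="spearman")["clicks"])
```

Spearman rank correlation is a reasonable default here because ad spend and click counts tend to be heavily skewed, which would distort an ordinary Pearson correlation.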

Most prior research on disinformation campaigns has focused on organic posts or untruthful media outlets, not ads. Today, these kinds of ads are easily purchased and targeted to specific groups of people, opening up new avenues for deception, says Fernandes.

“Anyone can buy an ad. I could go in and buy an ad and start spreading disinformation. We need to understand how these misleading ads spread,” Fernandes said.

How these kinds of misleading or divisive ads are identified and regulated is in flux. A case before the U.S. Supreme Court could help define what responsibility Facebook and other social media companies bear for content on their sites, potentially including advertisements. Facebook has implemented some moderation of misleading content, but the company has mostly focused on posts by users rather than ads.

In the meantime, Fernandes says, individuals have to protect themselves by applying a critical eye to what gets pushed into their social feeds.

“Sometimes I go on my Facebook feed and I see a sponsored ad and I wonder, ‘Why is this being shown to me right now?’” Fernandes said. “We need to educate people to ask these kinds of questions, to look at information and analyze: ‘Where is this coming from? Is it true?’ I think it’s a matter of teaching people to spot these signs that they’re being misled.”

Eric Hamilton, March 8, 2023