“Hot Topics in Computing” series covers the spread of information and fake news


This week CSAIL hosted the sixth forum in the “Hot Topics in Computing” speaker series, where computing experts hold discussions with community members on hot-button tech topics.

For this week’s fireside chat, CSAIL director Daniela Rus spoke with University of Washington Professor Kate Starbird about her work on the spread of misinformation (often referred to as “fake news”).

Initially, Starbird set her sights on examining the “self-correcting crowd” phenomenon, the belief that people will naturally dispel misinformation through public discussion. Her efforts, however, were redirected by a nearly antithetical behavior she observed: conspiracy theories surrounding the 2013 Boston Marathon bombing, which ultimately set the stage for a new climate of online discourse around disasters.

Over the next few years, she observed that so-called “alternative narratives” were cropping up in tandem with national crises like mass shootings, connecting people over seemingly inaccurate information.

Using a combination of qualitative and computational tools, Starbird began collecting data on tweets containing buzzwords like “hoax” or “false flag” that often linked to external alternative news sites. With her team, she created network visualizations showing which websites were cited together in reference to specific disaster events.
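To make that approach concrete, the sketch below builds a small domain co-citation graph of the kind behind those visualizations. The tweets, field names, and keywords here are hypothetical stand-ins, not Starbird’s actual data or pipeline.

```python
# Illustrative sketch: build a domain co-citation network from keyword-matched tweets.
# The tweet records, field names, and keyword list are hypothetical examples.
from itertools import combinations
from urllib.parse import urlparse

import networkx as nx

KEYWORDS = ("hoax", "false flag")

# Hypothetical tweet records: the tweet text plus the external URLs it links to.
tweets = [
    {"text": "This was a false flag, read the truth here",
     "urls": ["https://altnews-a.example/story", "https://altnews-b.example/post"]},
    {"text": "Officials confirm timeline of events",
     "urls": ["https://localpaper.example/report"]},
    {"text": "Another hoax being pushed by the media",
     "urls": ["https://altnews-b.example/post2", "https://altnews-c.example/page"]},
]

def cited_domains(tweet):
    """Return the set of external domains a tweet links to."""
    return {urlparse(u).netloc for u in tweet["urls"]}

# Keep only tweets that mention one of the alternative-narrative keywords.
matched = [t for t in tweets if any(k in t["text"].lower() for k in KEYWORDS)]

# Domains linked from the same tweet share an edge; edge weight counts
# how often the two domains are cited together.
graph = nx.Graph()
for tweet in matched:
    for a, b in combinations(sorted(cited_domains(tweet)), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# Export the graph for visualization in an external tool.
nx.write_gexf(graph, "domain_cocitation.gexf")
```

In a graph like this, clusters of domains that are repeatedly cited together point to the tightly interconnected ecosystem of alternative news sites that Starbird describes.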

“I categorize disinformation not as simply intentional misinformation, but as a tool used to confuse and create muddled thinking across society,” says Starbird. “This so-called ‘alternative media ecosystem’ we’ve found ourselves in is prompting people to lose trust in information systems.”

Starbird also mentioned research projects on fake news surrounding the #BlackLivesMatter movement and the White Helmets, a humanitarian organization operating in Syria. The misinformation surrounding these groups, she found, largely involved blaming unrelated actors for mishaps, enacting the worst caricatures of either side in order to undermine their legitimacy, or ascribing negative motives to their actions.

She explained that the solutions to these problems are not always technical: platforms need ways to evaluate whether someone promoting misinformation is acting in good faith or as a “bad” actor, and what differentiates a person who sincerely disagrees on an issue from someone who is maliciously trying to manipulate discourse.

During the question-and-answer portion, topics ranging from critical thinking to better computational tools to passive versus active content monitoring were all on the table.

Professor Rus started the discussion by asking whether machine learning techniques could be applied to any relatively predictable content ecosystem.

“We can observe content sharing patterns over time, seeing for example the same sets of sites doing similar things across different conversations,” says Starbird. “Sites use complex remixing, like translating back and forth on Google Translate, to appear bigger and more diverse than they are. This could be financially motivated as well as politically, so we need to find ways to distinguish between the two motivations - and [between] malicious and non-malicious actors - to get a better handle on misinformation.”
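As a rough illustration of how such content-sharing patterns might be surfaced, the sketch below flags pairs of sites publishing suspiciously similar articles. The article text and the similarity threshold are purely hypothetical, and real remix detection (especially of text translated back and forth) would need far more robust methods than word overlap.

```python
# Illustrative sketch: flag pairs of sites whose articles overlap heavily.
# The articles and the 0.5 threshold are hypothetical examples only.
from itertools import combinations

def shingles(text, n=2):
    """Return the set of n-word shingles in a lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard overlap between two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical articles keyed by the site that published them.
articles = {
    "altnews-a.example": "officials stage event to push agenda say witnesses on scene",
    "altnews-b.example": "witnesses on scene say officials stage event to push agenda",
    "localpaper.example": "city officials release updated timeline after the incident",
}

SIMILARITY_THRESHOLD = 0.5  # purely illustrative cutoff

for (site_a, text_a), (site_b, text_b) in combinations(articles.items(), 2):
    score = jaccard(shingles(text_a), shingles(text_b))
    if score >= SIMILARITY_THRESHOLD:
        print(f"possible remix: {site_a} <-> {site_b} (similarity {score:.2f})")
```

A simple overlap measure like this only catches lightly reworded copies; distinguishing financially motivated content farms from politically motivated ones, as Starbird notes, requires looking at behavior across many conversations rather than at any single pair of articles.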