The promise and perils of AI and social messaging

CSAIL researcher Brad Hayes chats with Scientific American about how chatbots are used and misused.

Reprinted from Scientific American:

Call your computer program a “bot” and people are going to make certain assumptions, many of them negative. Twitterbots have become notorious over the past few years for their propensity to remove the human element from the microblogging service—automatically generating posts, following users and retweeting messages. Microsoft’s Tay, touted as artificially intelligent, proved anything but last month after users turned it into a trash-talking chatbot, prompting the company to quickly take it offline. Over the past decade “bot” has also become synonymous with a zombie computer that hackers hijack and use to attack other computers.

So what to think of Facebook’s new plan to unleash its version of “chatbots” on its extremely popular Messenger service? Should the company be worried that its AI effort to better cash in on Messenger’s more than 900 million users worldwide could go awry?

Despite the reputation some bots have garnered, Facebook’s chatbots are not a big risk for the company, says Brad Hayes, a postdoctoral associate [at CSAIL] whose work focuses on human–robot teaming. As the creator of DeepDrumpf, the infamous Twitterbot that produces fake Donald Trump tweets by emulating the Republican presidential candidate’s word choices and speech patterns, Hayes has plenty of experience tinkering with these programs on a big stage. The AI Twitter page @DeepDrumpf has more than 21,000 followers despite having posted only about 170 tweets. Hayes also runs a Twitterbot for Democratic presidential hopeful Bernie Sanders, @DeepLearnBern.
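The article does not describe how DeepDrumpf works internally (Hayes has reportedly built it on a neural-network language model), but the general idea of emulating someone’s word choices and speech patterns can be illustrated with a much simpler word-level Markov chain. The sketch below is purely illustrative and not the bot’s actual code; the function names and the toy corpus are invented.

```python
import random
from collections import defaultdict

def build_model(corpus, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        model[prefix].append(words[i + order])
    return model

def generate(model, length=20):
    """Sample text by repeatedly choosing a word that followed the current prefix."""
    prefix = random.choice(list(model.keys()))
    output = list(prefix)
    for _ in range(length):
        followers = model.get(tuple(output[-len(prefix):]))
        if not followers:
            break  # dead end: this prefix was never followed by anything in the corpus
        output.append(random.choice(followers))
    return " ".join(output)

# Toy corpus standing in for a collection of speech transcripts.
corpus = "we are going to win we are going to build we are going to make it great"
model = build_model(corpus, order=2)
print(generate(model, length=12))
```

Even this crude approach reproduces recognizable turns of phrase from its source text, which is why larger, better-trained models can sound convincingly like a specific speaker.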

“If nothing else, it’s going to be a fantastic learning experience for Facebook,” he says of the company’s foray into chatbots. “If this kind of thing fails, they’re still going to get a lot out of it because very few people can do this at the level and scale they’re doing. And the fact that they’re trying to make money using this suggests they’ll put a lot more effort into it in terms of making sure it serves its intended purpose.”

Microsoft’s biggest problem with Tay was that it had no filter—the bot digested disparaging comments that degraded women and extolled Nazism, to cite just two examples—and then regurgitated that content as offensive tweets. The company allowed the bot to accept whatever came in, and ended up having to apologize publicly. Tay learned bad things from bad information and responded to it—just as it was designed to. “This is a fairly important lesson that companies and their developers should take to heart,” Hayes says. “Given that data tends to be the most valuable asset for any kind of artificial intelligence–oriented endeavor, there’s a huge temptation to turn to the world at large to collect that data because it’s free and available in large quantities. The problem with Microsoft’s chatbot is that it wasn’t getting the information that they wanted and did nothing to try to figure that out.”
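Neither Microsoft nor Hayes spells out how such screening should be built, but the basic safeguard he alludes to—vetting user-supplied text before it becomes training data—can be sketched very simply. The example below is hypothetical: the function name, the blocklist, and the sample messages are all invented, and a production system would rely on trained moderation classifiers rather than keyword matching.

```python
def filter_training_messages(messages, blocklist):
    """Keep only messages that contain none of the blocked terms."""
    cleaned = []
    for text in messages:
        lowered = text.lower()
        if any(term in lowered for term in blocklist):
            continue  # drop messages with disallowed content before training
        cleaned.append(text)
    return cleaned

# Toy inputs standing in for a stream of user messages.
incoming = ["have a great day", "some hateful phrase here"]
blocklist = {"hateful phrase"}  # stand-in for a real moderation list or classifier
print(filter_training_messages(incoming, blocklist))
```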