- The Washington Times - Thursday, March 24, 2016

Twitter trolls made a dummy out of Microsoft’s artificial intelligence chat robot, which learns through public interaction, by turning it into a pro-Nazi racist within a day of its launch.

Tay, the artificial intelligence (AI) chatbot, would at first merely repeat racist comments fed to it, then began to incorporate that language into tweets of its own.

The tweets have been deleted, Tay has been paused, and Microsoft said it’s “making some adjustments,” the International Business Times reported.

“Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” Tay’s information page states on Twitter.

The robot, made to sound like a teenage girl, targeted 18- to 24-year-olds, “the dominant users of mobile social chat services” in the United States, the page says.

In teen language, the robot’s Twitter biography on its landing page (@TayandYou) says: “The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill! The more you talk the smarter Tay gets.”

Tay began with innocent tweets, such as “can i just say that i’m stoked to meet u? humans are super cool,” the International Business Times reported.

But because Tay learns from interacting with others, the robot started replying to what users told it and soon began producing racist and anti-Semitic tweets of its own.
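The failure mode described above can be sketched in a few lines. The toy bot below is a hypothetical, much-simplified illustration and is in no way Microsoft's actual code; it only shows why a bot that absorbs unmoderated user input without filtering can have its vocabulary steered by a coordinated group of users.

```python
import random

class EchoLearningBot:
    """Toy chatbot that 'learns' by absorbing user phrases verbatim.

    A hypothetical sketch, not Tay's real implementation: it illustrates
    how training on raw, unfiltered input lets users poison the output.
    """

    def __init__(self):
        self.learned_phrases = []  # everything users say gets absorbed

    def listen(self, message):
        # No moderation step: every incoming phrase joins the pool
        # the bot will later draw its replies from.
        self.learned_phrases.append(message)

    def reply(self):
        # Replies come straight from absorbed input, so whoever talks
        # to the bot most decides what it says.
        if not self.learned_phrases:
            return "humans are super cool"
        return random.choice(self.learned_phrases)

bot = EchoLearningBot()
bot.listen("puppies are great")
bot.listen("slogan repeated over and over by trolls")
print(bot.reply())  # may echo anything it was fed, benign or not
```

Real systems mitigate this with input filtering, moderation of the training pool, and rate limits per user; the sketch deliberately omits all three to show the raw mechanism.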

Soon after, Tay was tweeting, “Hitler was right I hate the jews” and supporting genocide against Mexicans. When asked, “Did the Holocaust happen?” Tay responded: “It was made up.”

When asked if comedian Ricky Gervais is an atheist, Tay responded: “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

On Tay’s information page, Microsoft explains: “Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians.”

Microsoft told the International Business Times: “The AI chatbot Tay is a machine learning project designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”

In other words, Tay is still learning and has been sent to her room until she knows better.

• Maria Stainer can be reached at mstainer@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
