
Hard to Believe Microsoft Didn’t See This One Coming: Its Chatbot for Millennials Begins Spewing Racist, Sexist Messages on Day 1

Mar 25, 2016

Microsoft’s launch of its new chatbot didn’t exactly go as planned this week. The company took the program, Tay, offline less than a day after its debut, after the bot began sending racist, sexist and other offensive messages, Re/code reports.

The bot was designed to converse with millennials and mimic their speech patterns. In a message that appeared on its website Thursday, the bot said: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”

Re/code notes: “Though Tay was apparently influenced by intentional hate speech, the fact is so are humans — and from an early age. Racism, sexism and xenophobia in general are all learned behaviors that are challenging to un-learn.”

The report says Microsoft isn’t the first tech firm to run into this type of problem. “IBM taught Watson the entire Urban Dictionary but quickly decided its computer would be better off not knowing everything,” Re/code notes.


