Microsoft’s launch of its new chatbot didn’t exactly go as planned this week. The company took the program, Tay, offline less than a day after launch, once the bot began sending racist, sexist and otherwise offensive messages, Re/code reports.
The bot was designed to converse with millennials and mimic their speech patterns. In a message that appeared on its website Thursday, the bot said: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”
Re/code notes: “Though Tay was apparently influenced by intentional hate speech, the fact is so are humans — and from an early age. Racism, sexism and xenophobia in general are all learned behaviors that are challenging to un-learn.”
The report says Microsoft isn’t the first tech firm to run into this type of problem. “IBM taught Watson the entire Urban Dictionary but quickly decided its computer would be better off not knowing everything,” Re/code notes.