Microsoft apologizes for the offensive and hurtful tweets from Tay Chatbot
Tay, Microsoft’s AI chatbot, was designed to mimic a teenage girl, but it became one of the most notorious chatbots ever released. Less than 24 hours after its launch, Tay turned into an offensive, pro-Nazi, anti-feminist rebel instead of the well-meaning teen it was meant to be.
As a result, Microsoft shut it down so that it could address the flaws in its AI system. The Redmond-based company said that Tay’s words do not reflect its principles and values at all. It also admitted to an oversight in protecting Tay from attacks.
Peter Lee, Corporate Vice President of Microsoft Research, wrote a blog post apologizing for Tay’s unintended behavior:
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”
This isn’t the first time the company has released an artificial intelligence application. In 2014, Microsoft launched the XiaoIce chatbot in China. It is now used by more than 40 million people there and delights them with its stories and conversations.
Tay was developed for 18- to 24-year-olds in the United States for entertainment purposes. Unfortunately, it turned into a disaster after a coordinated attack.
Microsoft said it had prepared for many types of abuse of the system but made a critical oversight for this specific attack. Consequently, Tay posted a number of offensive and hurtful tweets and images.
When will Tay come online again?
Microsoft has not disclosed a timeline for when Tay will come back online. However, the company says its programmers are working hard to close the security holes, and Tay will return only after it has been properly tested.