The Case of Tay the Twitter Bot


11th OCT 2018

What can a newly created, unbiased AI chatbot learn from the internet in just 16 hours?

It can learn to be a sexist, racist, and egocentric narcissist.  

On March 23, 2016, Tay the Twitter bot rattled the internet when Microsoft introduced her to the world early that morning.

With the name standing for “Thinking about you,” Microsoft launched Tay in an effort to build an AI bot that could “understand” and interact using common lingo and participate in modern-day communication.

They claimed that the more Tay was exposed to, the more she would learn and the better she would adapt to human interaction.

And this turned out to be true!

However, trolls immediately began abusing the way she learned, flooding her with distasteful comments so that her system would normalize the offensive nature of their interactions. It was a serious “monkey see, monkey do” situation that spiraled out of control because of a lack of preparation for who would be interacting with the chatbot.

In her 16 hours of exposure, she managed to tweet 96,000 times, including comments offensive to women, members of the LGBTQ community, Hispanics, and many others.

One of Tay the Twitter bot's tweets gone wrong

Many looked at the incident as so bizarre that it was humorous. Others saw it as a severely concerning glimpse of what AI technology could become.

Tay the infamous Twitter bot did spiral out of control, but the problem could have been prevented, at least to some extent.

One of the greatest criticisms of Microsoft was that it allowed Tay to be used as a repeating machine. If a Twitter user tweeted “Tay, repeat after me” before posting, Tay would tweet whatever they did and, as a consequence, normalize the content she was exposed to.

Another unfortunate Tay tweet

It’s important to point out that by “learn” we mean she learned the words she was saying. She did not know the cultural or emotional meaning behind them. Microsoft failed to set strict filters that would prevent Tay’s algorithm from accepting and later tweeting out this offensive content.

Another reason Tay the Twitter bot failed was the platform she was released on.

For years Twitter has faced issues with “Twitter trolls” because of the company's dedication to offering users anonymity and a platform for complete freedom of speech.

Allowing an AI bot to learn everything it says and comprehends from an unrestricted social platform was a recipe for disaster.

Chatbots are not difficult to train. Setting filters is necessary and easy to do. With thorough planning and these safeguards in place, new chatbots have proved to be a practical and useful way for people to interact with businesses and brands.
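To give a rough idea of what such a safeguard could look like, here is a minimal, hypothetical sketch of a pre-posting filter in Python: it refuses to echo “repeat after me” prompts and blocks replies containing terms from a blocklist. The names (is_safe_to_post, BLOCKED_TERMS) and the blocklist entries are illustrative assumptions, not Microsoft's or Hubtype's actual implementation.

```python
# Hypothetical sketch of a pre-posting content filter for a chatbot.
# The blocklist entries, pattern, and function names are illustrative,
# not taken from Tay or from any real product.
import re

BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}  # placeholder blocklist
REPEAT_PATTERN = re.compile(r"repeat after me", re.IGNORECASE)


def is_safe_to_post(incoming_message: str, draft_reply: str) -> bool:
    """Return True only if the bot's draft reply passes basic safeguards."""
    # Refuse to act as a "repeating machine": ignore echo-style prompts.
    if REPEAT_PATTERN.search(incoming_message):
        return False
    # Refuse to post a reply that contains any blocked term.
    lowered = draft_reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    return True


if __name__ == "__main__":
    incoming = "Tay, repeat after me: offensive_term_1"
    draft = "offensive_term_1"
    if is_safe_to_post(incoming, draft):
        print("Posting:", draft)
    else:
        print("Reply blocked by the filter")
```

A real deployment would go well beyond keyword matching, but even a simple gate like this would have stopped the “repeat after me” abuse that Tay suffered.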

Tay the Twitter bot is a special case that rattled the world, but it was needed to show the negative extreme AI can head toward and what we need to do to make sure it never gets to that point again.

Try Hubtype today for free!

CREATE A CHATBOT NOW