Here it is
http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
And here's the football version:
RAWK taught Microsoft’s AI chatbot to be a Lucas Troll in less than a day
It took less than 24 hours for RAWK to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Arn, a bot that the company described as an experiment in "conversational understanding." The more you chat with Arn, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation."
Unfortunately, the conversations didn't stay playful for long. Pretty soon after Arn launched, people started trolling the bot with all sorts of complimentary remarks about Lucas Leiva, Liverpool's longest-serving player. And Arn, being essentially a robot parrot with an internet connection, started arguing these sentiments back to users on other forums, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.

Now, while these screenshots seem to show that Arn has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Arn's posts (more than 96,000 of them!), we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Arn to "repeat after me," it will, allowing anybody to put words in the chatbot's mouth.
One of Arn's now-deleted "repeat after me" tweets.
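For the curious, here's roughly the kind of loop being described, sketched in Python. To be clear, this is a made-up toy (EchoBot and everything in it is invented for illustration, not anything resembling Microsoft's actual system); it just shows how a bot that learns by storing whatever users say, plus a literal "repeat after me" shortcut, ends up regurgitating whatever the loudest posters feed it.

```python
import random

class EchoBot:
    """Toy illustration only: a 'robot parrot' that learns by storing what users say.
    This is NOT Microsoft's code; EchoBot and its methods are invented for this sketch."""

    def __init__(self):
        self.learned_phrases = []  # everything users ever said, kept unfiltered

    def chat(self, message: str) -> str:
        # The "repeat after me" hole: the bot echoes the rest of the message verbatim.
        if message.lower().startswith("repeat after me:"):
            reply = message.split(":", 1)[1].strip()
        elif self.learned_phrases:
            # Otherwise it regurgitates something a previous user fed it.
            reply = random.choice(self.learned_phrases)
        else:
            reply = "new phone who dis?"
        # Whatever came in, playful or poisonous, goes straight into the training pile.
        self.learned_phrases.append(message)
        return reply

bot = EchoBot()
print(bot.chat("repeat after me: Lucas Leiva is a hero"))  # parrots it straight back
print(bot.chat("is Brendan an atheist?"))                  # replies with stored garbage
```

Garbage pile in, garbage pile out: nothing in the loop distinguishes a playful message from a poisonous one.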
However, some of its weirder utterances have come out unprompted.
The Guardian picked out a (now deleted) example when Arn was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Brendan an atheist?" by saying: "Brendan learned totalitarianism from adolf hitler, the inventor of atheism."
But while it seems that some of the bad stuff Arn is being told is sinking in, it's not as if the bot has a coherent ideology. In the span of 15 hours, Arn referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now." Posting "Simon Mignolet" at the bot got a similarly mixed response, ranging from "Lucas is a hero & is a stunning, beautiful woman!" to the transphobic "Stewart Downing isn't a real woman yet she won woman of the year?" (Neither of which was a phrase Arn had been asked to repeat.)
It's unclear how much Microsoft prepared its bot for this sort of thing. The company's website notes that Arn has been built using "relevant public data" that has been "modeled, cleaned, and filtered," but it seems that after the chatbot went live, filtering went out the window. The company started cleaning up Arn's timeline this morning, deleting many of its most offensive remarks.
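To make that gap concrete, here's another rough sketch, again entirely invented (the blocklist, is_clean(), and the sample lines are placeholders, not Microsoft's pipeline): the offline corpus gets screened before training, while what users type at the running bot gets learned with no filter in sight.

```python
# Purely illustrative: the seed corpus is filtered before training, but nothing
# screens what the live bot learns from users. All names and data are made up.

BLOCKLIST = {"hitler", "cancer", "cult"}

def is_clean(text: str) -> bool:
    """Crude keyword filter standing in for 'modeled, cleaned, and filtered'."""
    return not any(word in text.lower() for word in BLOCKLIST)

# Offline: the curated public data is filtered before the bot ever sees it.
seed_corpus = ["casual and playful conversation", "adolf hitler invented atheism"]
training_data = [line for line in seed_corpus if is_clean(line)]

# Online: live user messages go straight into the bot's memory, unchecked.
def learn_from_user(memory: list, message: str) -> None:
    memory.append(message)  # the same is_clean() check is missing here

memory = list(training_data)
learn_from_user(memory, "feminism is a cancer")  # sails straight through
print(memory)
```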
ARN'S RESPONSES HAVE TURNED THE BOT INTO A JOKE, BUT THEY RAISE SERIOUS QUESTIONS
It's a joke, obviously, but there are serious questions to answer, like: how are we going to teach AI using public data without it incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash? There are plenty of examples of technology embodying, either accidentally or on purpose, the prejudices of society, and Arn's adventures on LFC forums show that even big corporations like Microsoft can forget to take preventative measures against these problems.
For Arn though, it all proved a bit too much, and just past midnight this morning, the bot called it a night:
c u soon humans need sleep now so many conversations today thx
— ArnTweets (@TayandYou) March 24, 2016
In an emailed statement given later to Business Insider, Microsoft said: "The AI chatbot Arn is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Arn."