
Microsoft programmed Tay, an artificial intelligence bot, as an innocent teenage girl to chat online with millennials and learn how to use language along the way.

It took just one day after its launch for social media users to corrupt Tay, teaching her to spew hate speech. Consequently, the tech giant unplugged Tay and deleted dozens of her messages, the Huffington Post reports.

A Microsoft spokesman emailed this comment to the Post:

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

According to Business Insider, Microsoft launched Tay earlier this week as a program that learns from human conversation. But white supremacists and sexists quickly found an open door to feed it offensive language, which the program repeats without comprehending.

In one infamous tweet, Business Insider reported, Tay used a racially derogatory description of President Barack Obama, accused former President George W. Bush of causing 9/11, and commented that Adolf Hitler “would have done a better job.” The same tweet called presidential candidate Donald Trump “the only hope we’ve got.”

Critics say Microsoft should have anticipated Tay’s vulnerability and added filters to prevent what happened.

SOURCE: Huffington Post, Business Insider | PHOTO CREDIT: Getty


Microsoft Unplugs Chatbot After Racist, Sexist Rants was originally published on newsone.com