It's interesting how consistently and quickly the Internet contaminates a mind designed to be clean. Take Microsoft's attempt at an AI that could hold an Internet conversation (from Wikipedia):
Tay was an artificial intelligence chatterbot that was originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only 16 hours after its launch.[1] According to Microsoft, this was caused by trolls who "attacked" the service as the bot made replies based on its interactions with people on Twitter.[2] It was soon replaced with Zo.
Uh...
The big issue with Tay is that she was designed as a learning AI, meant to adapt based on those who interacted with her.
Microsoft didn't like what she learned, however, and so purged her. By some reports, she was briefly back online, but when certain topics were brought up she responded with pre-programmed sentiments rather than accepting anything new.
This led a lot of people to examine the ethical issues surrounding Tay, with Microsoft's actions seen as morally akin to either a lobotomy or brainwashing, depending on who you were talking to.