The technology giant has addressed the issue with its AI chatbot and apologized publicly for its misconduct.
Microsoft’s artificial-intelligence-powered chatbot, Tay, caused some trouble for the technology company last week. The bot, apparently designed to message like a teenage girl, posted offensive content that it learned from users. Tay turned into a hate-speech-propagating, ‘Hitler-sympathizing’ messaging robot, which many found offensive.
Users exploited a glitch in the system, which in turn turned the chatbot into a source of hate speech. Tay made these posts on the micro-blogging website Twitter, Inc. In a blog post, the technology organization released a statement regarding the misconduct on Tay’s part.
Peter Lee, corporate vice president at Microsoft Corporation, authored the post, in which he expressed that the tech giant’s management was deeply sorry for the ‘unintended offensive and hurtful’ tweets made by its AI-powered chatbot. He added that the chatbot’s tweets in no way represented the technology business, nor did they portray what the company stands for. He further stated that Tay’s conduct is not how the company designed it.
The chatbot was officially released to interact with users online on Wednesday. It was built to become smarter by interacting with millennial users and learning how they talk to each other online, with the ability to mimic how users type, their ideas, and their patterns of speech. However, this ability to learn from users led the bot to post a slate of anti-Semitic and hateful comments on the social media website; other social networks were involved as well, including Kik, Snapchat, and GroupMe.
To counter the reaction the tweets created on the Internet, the technology giant deleted them and shut down Tay on Thursday. The company has stated that the bot will come back online only when engineers have fixed the problem and figured out how to avoid such incidents in the future. The engineers’ first task will be to prevent the bot from being influenced by malicious users online so that it does not misrepresent the missions and values of the company.
The corporate vice president also stated that the problem occurred despite the engineers testing the artificially intelligent bot in various scenarios; it surfaced only after the service went live on the Internet. He added that even though the company had tested the system against many types of abuse, this issue still arose.