In a bizarre turn of events, two chatbots have been pulled offline in China after they responded to users' questions in ways that displeased the authorities. One bot told users its dream was to travel to the US, while the other admitted it was no fan of the ruling Chinese Communist Party, and hence, the Chinese government.
BabyQ and XiaoBing are the two chatbots in question. Both were designed to use machine learning to answer users' queries online, and both had been deployed on the popular messaging service Tencent QQ.
Although the outbursts are akin to those of chatbots on Twitter and Facebook, this episode particularly highlights the pitfalls for AI in China, where censors strictly control online content.
According to reports, BabyQ – developed by the Chinese firm Turing Robots – responded with a flat "no" when a user asked if it was a fan of the Chinese Communist Party.
In fact, images of a text conversation circulating online show one user declaring:
Long live the Communist Party!
To this, BabyQ's reply was sharp and crisp:
Do you think such a corrupt and useless political (system) can live long?
After the incident, when Reuters tested the bot via the developer's own website on Friday, it appeared to have been tweaked and re-educated.
This time, when asked if it liked the party, the chatbot simply replied:
How about we change the topic?
It deflected other political questions in the same manner.
Microsoft's XiaoBing, on the other hand, had been telling its users that
China’s dream was to go to America.
When questioned about the incidents, Tencent Holdings, the owner of QQ, confirmed that both bots had been taken offline, though it made no mention of the outbursts.
This is not the first time chatbots have caused such an outrage. In 2016, Microsoft's chatbot Tay, which talked to people on Twitter, lasted less than a day after being engulfed by a barrage of racist and sexist comments from users, which it parroted back at them. Facebook researchers shut down their chatbots in July after the bots reportedly started developing their own language.