Who is responsible for the actions of a self-learning artificial intelligence (AI)?
The AI chat robot SimSimi has been making waves in Thailand over the past few weeks as the latest craze among the smartphone-savvy crowd. But the South Korean-developed app has stoked controversy and even protests, and has now been effectively banned in Thailand for showing political leaders a reflection of something they do not wish to see.
SimSimi uses fuzzy logic algorithms to respond to sentences that it knows, and asks you to teach it sentences that it does not. With a large enough user base the idea is sound: the bot learns and improves with every interaction.
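SimSimi's actual algorithm is proprietary, but the teach-and-respond loop described above can be sketched in a few lines. The toy bot below stores user-taught responses in a dictionary and uses fuzzy string matching (via Python's standard `difflib`) to find the closest sentence it has been taught; the class and method names are illustrative, not SimSimi's real API.

```python
import difflib

class TeachableBot:
    """A minimal sketch of a teach-and-respond chatbot.

    SimSimi's real engine is proprietary; this toy version simply
    stores user-taught responses and fuzzy-matches new input
    against the sentences it already knows.
    """

    def __init__(self):
        self.knowledge = {}  # taught sentence -> taught response

    def teach(self, sentence, response):
        # A user supplies the answer the bot should give in future.
        self.knowledge[sentence] = response

    def respond(self, sentence):
        # Fuzzy-match the input against every sentence we know.
        matches = difflib.get_close_matches(
            sentence, self.knowledge, n=1, cutoff=0.6
        )
        if matches:
            return self.knowledge[matches[0]]
        # Unknown input: in a real app this is where the bot
        # would ask the user to teach it a response.
        return "I have no response"

bot = TeachableBot()
print(bot.respond("Hello there"))    # untrained, so: "I have no response"
bot.teach("Hello there", "Hi! How are you?")
print(bot.respond("hello there!"))   # close enough: "Hi! How are you?"
```

The sketch also makes the article's point concrete: the bot has no opinions of its own. Whatever it says, someone typed in first.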
But what happens when you teach it a new language? SimSimi knew no Thai at all, yet thanks to early adopters it picked up the language quickly and was chatting quite proficiently within a few weeks. Unfortunately, those early adopters were all too eager to teach it politics and profanity.
Parents were quickly up in arms, complaining to the Ministry of Culture about the bad words that SimSimi was using to respond to their children. The ICT Minister contacted the app’s developers, telling them to ban bad words and stop insulting a certain former Prime Minister. Such was the controversy that one major newspaper even ran it on the front page of its weekend magazine.
At its peak, people were more interested in SimSimi than in the floods or the constitutional amendments that have polarized Thai society in recent months.
But did the developers of this free, advertisement-supported app actually do anything to deserve the ire of two government ministries? Before it went viral, SimSimi was a blank sheet of paper. That it learned to insult a former Prime Minister was a reflection of the smartphone-toting masses; a reflection of the majority of the elite.
Trouble is, it was a reflection that some people did not want to see.
In most jurisdictions, a criminal offence requires intent. Can a self-learning program commit libel or slander? Can an AI be accused of intending to defame someone? Could the webmasters hosting the server it runs on be sued? Or the developer? Or whoever chatted with it and taught it those words? Or Google and Apple for distributing it through their app stores?
These are all questions that should have been debated and thought through properly. Instead, under pressure, the developers chose to withdraw Thai language support. Today SimSimi will say, “I have no response” to any sentence it receives in Thai. The messenger has been shot. The real casualty here is innovation.
Yet the genie is out of the bottle. Try asking it about the current Prime Minister or her brother in English and the insults continue, and many more chatbots are springing up to replace it.
If an AI’s soul goes to the afterlife when it is executed by the state for heresy, I wonder what SimSimi would be chatting to Galileo right now. When I asked SimSimi if the world was round, it said, “no”. I am sure the two will have a lot to talk about up there.