Columns
Thinking about machines that think
A machine is not equipped to decide what is right and what is wrong.
Sambridh Ghimire
When I think about machines that think, I think about software that thinks rather than armoured hardware. I think more of Brainiac of Krypton and less of the T-800 from The Terminator. Pop culture has predisposed us to visualise a thinking machine as a sentient military assassin rather than a Go player. The Turing test is designed to find out whether a machine can think, but it is really a question about a machine's ability to talk like a human rather than to think like one. I am more concerned about the machines already around us: our translators and our spellcheckers, our smartphones and our spam filters. To narrow it down further, rather than asking whether they can think, I like to ponder how they think, or how they will think.
My main area of concern is where machines meet morality. Machines are designed to perform specific tasks. A PlayStation would be a poor replacement for a crane lift; the lift was designed with its job in mind. Machines now perform tasks we would not have imagined a few years ago. We see machines flagging inappropriate content because they can do so faster than humans. So much data is uploaded to the internet daily that human oversight of all of it is impossible, and here we need machines. But asking a machine to target such content inherently asks it to weigh the moral repercussions of its actions. While a computer can quickly identify your pets in your photos, it cannot, for instance, tell you whether your dog is a good boy. I do not believe that a machine is equipped to decide what is right and what is wrong.
Complex thought
I cannot comprehend how so complex a judgement can be reduced to algorithms, partly because of my limited knowledge of computer programming, and partly because humans themselves cannot make that decision with 100 percent certainty. Things that were acceptable to say in the 1950s now provoke people to raise their pitchforks. Our morality shifts with time, and my concern is that machines cannot keep pace with that shift. We can build an algorithm that scans for expletives, but we cannot equip it to keep moving its moral goalposts along with us. Take Microsoft's Tay chatbot, which users taught to be racist in less than 24 hours. This was not because the programmers wanted a Nazi chatbot, but because the chatbot could not distinguish between what was appropriate to say and what was not, between good knowledge and bad.
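To make that limitation concrete, here is a minimal sketch of such an expletive scanner. It is purely illustrative: the blocklist, placeholder words and function name are my own assumptions, not any real moderation system.

```python
# A naive content filter whose sense of "inappropriate" is frozen into a
# hand-written blocklist (placeholder words, for illustration only).
BLOCKLIST = {"expletive_a", "expletive_b", "slur_c"}

def is_inappropriate(text: str) -> bool:
    """Flag the text if any blocklisted word appears in it."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# The filter applies yesterday's standard forever: when society's sense of
# what is acceptable shifts, nothing here shifts with it until a human
# rewrites the list.
print(is_inappropriate("An ordinary sentence."))  # False
```

The point is not that the code is crude; it is that its moral goalposts are, quite literally, constants.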
When I think of machines that think, I think about whether machines have a moral compass and how they would use it. A smart sprinkler can now detect that your lawn needs watering, run until the lawn is hydrated, and switch itself off. But is it capable of knowing that it should use water sparingly during a drought? A biometric attendance machine can record whether a student was present when class began, but can it make an exception for a differently-abled student who happened to limp in 40 seconds late?
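The sprinkler's blind spot can be sketched in a few lines. The threshold, sensor and valve functions below are hypothetical stand-ins, not any real device's API.

```python
# A minimal sketch of a "smart" sprinkler: it sees one number and one
# threshold, and nothing about droughts or water restrictions.
MOISTURE_THRESHOLD = 0.30  # illustrative value, not from any real device

def should_water(soil_moisture: float) -> bool:
    """Water whenever the lawn is drier than the threshold."""
    return soil_moisture < MOISTURE_THRESHOLD

def run_cycle(read_moisture, open_valve, close_valve):
    # read_moisture, open_valve and close_valve stand in for hardware calls.
    if should_water(read_moisture()):
        open_valve()
    else:
        close_valve()
    # Note what is absent: there is no input for "the city is rationing water".
    # Scarcity is a moral consideration the sensor simply cannot see.
```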
Moral decision
Machines perform various tasks, and some of those tasks are bound to put them in situations where they have to make a moral decision. Morality, unlike humidity, is unquantifiable. After an earthquake, should a rescue machine prioritise paramedics, a school with 200 kids, or Parliament? Massacring every human with HIV would eradicate the disease: an effective yet appalling solution. How do we teach machines human values if we cannot decide what those values are? Early warning systems to identify at-risk youth are now in development; they collate data across social platforms to estimate risk, presumably so that pre-emptive action can be taken. But at what point does such a system turn into a paternalistic machine running real-time analysis of how you think, one that socially engineers us according to the algorithms that engineered it?
Murphy's law says that whatever can happen will happen. Because there is scope for abuse, I can be reasonably confident that these systems will be abused in the near or distant future. Even mundane tasks require some sort of moral judgement. Is this email asking for donations to Black Lives Matter spam? What are the repercussions of a machine flagging it as inappropriate without any prior indication from the user? When an app collates articles that might interest me, does it unwittingly make a judgement about the kind of information I have access to? These are the things I think about when I think about machines that think.