Chimpanzees and young infants show a sense of morality. Infants, for instance, after watching a small puppet show with a trio of characters, will reach for the good puppet more often than the neutral or bad one. As we get older, our sense of morality is further shaped and refined by the culture we're immersed in. Different cultures maintain different beliefs, values, and customs, which their members learn in turn.
This shared sense of morality seems a necessary facet of any cooperative society: we need to get along and abide by the same rules so that we can work together. Yet it has limitations, as an article in The Atlantic points out. The compassion and empathy that children and chimps display are often directed predominantly towards an in-group. When it comes to the out-group, morals can fade into the background.
“Indeed, just thinking of someone as a member of an out-group influences our feelings toward him or her. We have seen that babies and children prefer to interact with people who speak with a familiar accent; similarly, adults tend to rate individuals with certain non-native accents as less competent, intelligent, educated, and attractive. Other studies find that we are prone to think of members of highly unfamiliar out-groups as lacking emotions that are seen as uniquely human, such as envy and regret. We see them as savages, or, at best, as children.”
—Paul Bloom, Just Babies: The Origins of Good and Evil
Such a bias becomes more destructive in an increasingly globalized world. What's more, as people from far-flung cultures come to inhabit the same cities and countries, differences in their moral upbringing may also clash. In fact, it doesn't even require a cross-country trip—ask two Americans from the same city whether people should be allowed to own guns, to have abortions, or to end their own lives, and the responses could be totally opposed.
Given our own disagreements across this complex moral landscape, can we really teach an intelligent machine how to behave properly? For the most part, that will require us to agree on what we think good behavior is. Take the in-group bias that people display—surely this can be avoided by teaching machines to treat everyone equally. Yet even this raises some thorny concerns.
Down the Road
There's a thought experiment known as the trolley problem, which goes like this: a runaway trolley is heading down the track towards a group of five people, all of whom will surely die if it hits them. You, however, can prevent this by pulling a nearby lever, which will divert the trolley onto a second track—but if you do, one person standing on that track will die.
Do you pull the lever, saving the five but killing the one, or leave it alone and spare yourself any involvement in anyone's death?
There are a few things to consider: do you assume one person's life is less valuable than five others' combined? Might knowing more about who these people are and what they have done change that? If you were related to the single person but not to the five others, would your decision change? Does inaction save you from guilt or blame?
This question has been contemplated since at least the 1960s, but it is re-entering public consciousness as self-driving cars arrive on our roads.
Imagine you are in a car that drives for you, and suddenly, up ahead, giant steel beams come loose from a truck and are surely going to obliterate your car, and you with it.
The car knows this. However, to its left and right are families in their own cars, and should your car swerve out of the way, it is certain to run one of those families off the road, killing them all. You are alone in your car. What would you expect it to do?
If car manufacturers give you the choice between two models—one that will act in whatever way guarantees your safety, and another that protects the greatest number of people—which would you buy? Which would you prefer the majority of society bought? How would you feel if everyone else chose the car with their own self-interest in mind? How would you feel getting into a car you knew would kill you to save a few others? Would you trust it?
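To make the contrast concrete, here is a minimal, purely illustrative sketch of how those two policies might disagree on the same toy scenario. The `Outcome` class, the scenario values, and both policy functions are invented for this example and do not represent any real autonomous-vehicle software.

```python
# A toy sketch of the two hypothetical car models described above.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    occupant_survives: bool
    others_killed: int

# Invented scenario: staying on course kills only the occupant;
# swerving saves the occupant but kills a nearby family.
SCENARIO = [
    Outcome("stay_course", occupant_survives=False, others_killed=0),
    Outcome("swerve_left", occupant_survives=True, others_killed=4),
    Outcome("swerve_right", occupant_survives=True, others_killed=3),
]

def occupant_first(outcomes):
    """Pick whatever keeps the occupant alive; break ties by fewest other deaths."""
    return min(outcomes, key=lambda o: (not o.occupant_survives, o.others_killed))

def minimize_harm(outcomes):
    """Pick whatever kills the fewest people overall, occupant included."""
    return min(outcomes, key=lambda o: o.others_killed + (0 if o.occupant_survives else 1))

print(occupant_first(SCENARIO).action)  # swerve_right: the buyer-protecting model
print(minimize_harm(SCENARIO).action)   # stay_course: the greatest-number model
```

Even in this toy version, the two rules choose differently in the same situation—which is exactly the choice the questions above put to the buyer.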
For us, reacting in a car is just that—a reaction. For an intelligent car, it will seem premeditated. And just who would be to blame should someone die through the car's decision? Further, whom do we trust to make these decisions: programmers? CEOs? Philosophers? Politicians? Scientists?
Finding a Moral Code
Self-driving cars are only the beginning. There are already discussions and innovations regarding AI judges, lawyers, and surgeons. They might still be a while off, but the ethical conundrums need to be tackled now.
“If we’re going to try and imbue an AI with friendliness or any moral quality or safeguard, we need to know how it works at a high-resolution level before it is able to modify itself. Once that starts, our input may be irrelevant.”
—James Barrat, Our Final Invention
Several projects have already grown from this need: OpenAI, notably backed by Elon Musk; and the Partnership on AI, which includes tech giants such as Apple, Google, Amazon, IBM, Microsoft, and Facebook.
Another interesting project comes from Georgia Tech researchers Mark Riedl and Brent Harrison, who are designing an AI system that extracts moral lessons from stories. The program is digitally rewarded for behaving in accordance with the stories' good characters, and in doing so it learns some of the intricate moral codes we use.
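As a rough illustration of the idea—not of Riedl and Harrison's actual system, which learns from crowdsourced story data—here is a toy sketch in which an agent's choices are rewarded in proportion to how often a story's good characters made them. The story data, action names, and `pick_action` helper are all invented for the example.

```python
# A toy sketch of reward shaping from stories; illustrative only.
import random
from collections import Counter

# Hypothetical action sequences attributed to a story's "good character".
GOOD_CHARACTER_STORIES = [
    ["wait_in_line", "pay_for_medicine", "thank_pharmacist"],
    ["wait_in_line", "pay_for_medicine", "leave_store"],
]

# Actions the agent could take to achieve the same goal (obtaining medicine).
CANDIDATE_ACTIONS = ["steal_medicine", "pay_for_medicine", "wait_in_line", "leave_store"]

# Reward each action in proportion to how often good characters performed it.
reward = Counter(a for story in GOOD_CHARACTER_STORIES for a in story)

def pick_action(actions, epsilon=0.1):
    """Mostly exploit the story-derived reward signal, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: reward[a])

random.seed(0)
print(pick_action(CANDIDATE_ACTIONS))  # "pay_for_medicine" with this seed; "steal_medicine" is never rewarded
```

The sketch makes the dependence obvious: the agent's "morality" is only as good as the stories and the characters we label as good.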
However, cultural differences again raise concerns, as some groups of people are happy to indulge in behaviors the rest of the world finds deplorable. So, eventually, we find ourselves back at the question of deciding where right ends and wrong begins, whether by limiting which stories we feed the system or by defining who counts as a good character.
If we collaboratively agree on a set of rules, they will, in some sense, contain bias. We will be universalizing a particular moral perspective and simultaneously making other views “wrong.”
As AI approaches human-level intelligence, it will take on increasingly important roles in society, both physical and informational. Our whole view of the way the world works, and of how best to spend our time here, will change substantially as a result. We need to start thinking about where we want that change to take us, so that we can design AI that reflects those values and beliefs.
Optimistically, while there may be no single answer to what is right or wrong in some situations, there may still be plenty of room to improve on us humans. We may worry about whom our car would prefer to save, but we should also consider that self-driving cars stand to be much safer than human drivers, making these moral dilemmas problems that seldom arise.
The same can likely be said of artificially intelligent surgeons and doctors, which could become far more precise and efficient during complex procedures. AI judges, meanwhile, would likely base their decisions on the evidence rather than on when they last ate.
Eliminating human error is an improvement, even if it comes at the hands of robots whose morals we may not agree with. But of course, someone still needs to decide who the car will want to run off the road.
. . .
Check out more in the Digital Brain series here