To pass the Turing test, a computer program must convince a human judge that it is a human by answering a series of questions the judge presents.
“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”
What does convincing the judge prove? Intelligence? Not really. What the test reveals to us is that the computer can pass for a human. It’s a test for humanness.
Passing such a test might not be so great, either. One concern is, of course, that people are often not that intelligent. We are prone to errors in judgment, biased, and open to manipulation.
In fact, we are often biased in predictable ways. We make mistakes in an oddly repetitive fashion. Human nature evolved not to be logical but to survive and procreate; many of our decisions are shaped by emotion and intuition more than by carefully reasoned analysis.
That doesn’t mean Turing was wrong. We do and should consider people intelligent. While mistake-prone, we are clearly the most intelligent species on the planet—so far. So if we can be convinced that a machine is a person, that machine should be said to be intelligent.
However, intelligence is more than what people do. Some of our behavior is intelligent, some is not, and some intelligent behavior we have never done. Computers may very well perform some intelligent calculations and make decisions we would never think of, but should we disparage this type of intelligence simply because it would not pass for human?
Passing the Test
Anything wanting to pass for a human would need to fail accordingly. A machine that does not make the same mistakes people do will fail a test for humanness, and a machine that does will either be dumbing itself down or be merely as intelligent as a person.
Neither of these options is appealing.
If the computer is as intelligent as a person, then it is going to make the same mistakes, which is exactly what AI is not supposed to do. AI is a big deal because of its potential to improve upon us, in particular to make up for our weaknesses and shortcomings.
And to be fair, AI is already better than us in many areas. The voice assistants currently on the market will seldom make a spelling mistake, nor are they likely to get their math wrong. But people do. This gets at the point that intelligence is not a single construct; it involves many skills.
If the intelligence of AI is looked at from the point of view that it is chasing or slowly rising to the level of human intelligence, how will we know when it gets there? It is already better than us in some regards, and we are still beyond AI in others. For it to pass as a human, it seems it would need to simultaneously get better and worse, smarter and more stupid.
An AI that dumbs itself down is concerning in less obvious ways. First, how are we going to feel about a program that can convince us it is human—impressed or manipulated? Turing himself noted that it will be intelligent when it can deceive us into thinking it’s human.
We will know that, despite its appearance of humanness, there is more data and processing going on behind that facade than could ever happen within our own brains. Would it not be slightly insulting to interact with a machine that is more intelligent than we are, yet insists on mimicking us? To know that it could improve upon us and correct our mistakes, but that right now it simply wants to walk, talk, and act like us?
AI in most shapes and forms is going to have a different mind—if I can call it such—than we do. We won’t want an AI that can get angry or jealous. We won’t want an AI that can be convinced the Earth is flat or that one group of people is better than another. So we’re going to want to pick apart human nature to get only what we deem good, then find a way to program that into the computer.
But even the very basic workings of such a mind would be vastly different. AI will be able to hold many more things in mind at once than we can. It won’t forget as much as we do; it will be able to think in more than three or four dimensions; it will be able to think of thousands of things at once.
Any inner experience on the part of such a machine would be starkly different to our own. It would seem a wasted effort to try to force this clearly unique system to think and behave the way people do.
It would be like trying to run Windows on a Mac—the hardware wasn’t developed for this operating system, and so it has to work harder to accomplish it. Perhaps that comparison isn’t quite strong enough, considering that if you were trying to accomplish humanness in a computer, you would be trying to run an operating system based upon biological wetware on a machine made of electrical components that function in unique ways. At least a Mac and a PC are made of largely the same stuff.
AI should likely be imbued with some human elements. It will do well to speak the same languages we do, so that we have a familiar way to communicate and interact. It might also have certain emotions (or something that behaves like an emotion) that help guide cognition and behavior in the right (human-friendly) manner—empathy jumps to mind.
AI can be programmed to display certain emotions to help us relate or feel comfortable with it, without feeling them in any sense like a person does. It can also be designed to read emotions from people and conjure an appropriate response.
But neither of these requires the AI to assume the human condition. AI that fears its own mortality, that can come to hate someone or something, or that gets upset when gambles don’t go its way probably won’t work well. So we’ll have to pick and choose such emotional elements to find a balance between humanness and computerness.
“When we say robots have emotion, we don’t mean they feel happy or sad or have mental states. This is shorthand for, they seem to exhibit behavior that we humans interpret as such and such.”
By finding a way for AI to be itself and work together with us, rather than trying to simulate human intellect and behavior, we might help reserve certain endeavors to humans—such as the arts.
Without the full spectrum of emotions that people have, could AI create truly compelling art and music? Aesthetic appreciation often relies upon an emotional response, and we know that many great works of art drew on negative feelings such as heartache and sorrow.
Yet AI art already exists, and it has led to some interesting creations, some of which could easily pass as human-created. Despite this, a few points are worth mentioning regarding AI-created art:
1. Establishing the rules: In many instances, before the AI can create anything that we would appreciate, human engineers need to set the constraints or to define the rules of the field—restricting the program to a certain key in music, for instance.
2. Learning from the best: It is also the case that AI is trained on many examples of work that other people have created—it therefore ends up with a homogenized conception of art. AI has been able to create new artworks in the styles of famous painters, and it has created pop songs that sound oddly like the Beatles, but again, the AI remains stuck within the boundaries of these artists.
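To make the first point concrete, here is a minimal sketch of what “establishing the rules” can look like in practice: a generator is confined to a musical key before it produces anything. The scale set, pitch range, and function names here are illustrative assumptions, not any particular system’s API.

```python
import random

# The C natural-minor scale as pitch classes: C, D, Eb, F, G, Ab, Bb.
# This set is the human-imposed "rule of the field."
C_MINOR = {0, 2, 3, 5, 7, 8, 10}

def in_key(pitch: int) -> bool:
    """True if a MIDI pitch number falls within the C minor scale."""
    return pitch % 12 in C_MINOR

def generate_melody(length: int, seed: int = 0) -> list[int]:
    """Sample random MIDI pitches, rejecting any that leave the key."""
    rng = random.Random(seed)
    melody = []
    while len(melody) < length:
        pitch = rng.randint(60, 72)  # one octave from middle C
        if in_key(pitch):            # the constraint engineers set up front
            melody.append(pitch)
    return melody

print(generate_melody(8))
```

The program never decides what counts as acceptable output; that judgment was baked in by a person before a single note was sampled, which is exactly the dependency the point above describes.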
We do all this too, of course—we study music and practice our instrument by copying our idols. But we don’t stop there. Simply following the rules and copying the greats won’t guarantee you make music that people want to listen to. You’re more likely to dilute the marketplace of a style already sufficiently explored by other artists with your own cheap knock-offs.
The key to good art is taking it somewhere new, somewhere unexpected or unheard of, and doing so in a way that elicits the emotional response the artist intends.
Can we get an AI to form its own style that isn’t simply a replica of another artist’s? Could we get it to create a style that isn’t just an ‘average’ of everything it’s learned? Could we—and this is the big question—get an AI to create a style of art that is unfamiliar yet could move us, something that could evoke not just random, but desired emotions within people?
In order to create something both new and good, AI would need to know when and how to break the rules. You can tell it what the minor scale is and how it should move between chords, but can you tell it how to break away from these rules and structures when the emotional context of the song demands it—and to do so in the most aesthetically pleasing way?
Without a mind that resembles a person’s, without the ability to relate to the full spectrum of emotions or to experience music and art for itself the way we do, the chances of AI moving us to tears through song are severely hindered. It would constantly require human judges to tell it, “Hey, this sounds good,” because it couldn’t judge that for itself.
Divided We Stand
If art remains safe from automation thanks to the complex interplay between rules, creativity and emotions, then perhaps other areas might too. Fields requiring some level of emotional intelligence and the ability to relate to the human condition could remain largely human endeavors.
Music, art, film, writing, gaming, illustration, and many roles within design rely on twisting the human psyche in ways only another human can appreciate or predict.
And isn’t that how it should be? Let computers handle the data and processing, let them conduct more detailed analysis and improve upon our logical decision-making, but let people handle the subjective, inner worlds of other people.
“… it seems like giving AI an understanding of the human condition would just be one more way to render ourselves obsolete—and in the process, relinquish the final quality that differentiates us from machines and makes us human.”
Over time, as computers grow in power and intelligence, AI will likely come to create art that can move us and that breaks new ground. With our help, it could come to understand, even if it cannot feel, how we respond to every type of input; it might know, even if it cannot experience them for itself, how to use certain emotions in the same way we do.
But this shouldn’t be what we aim for. I don’t want to be convinced that a super-intelligent computer is a person. I don’t want something capable of solving complex problems wasting its valuable resources trying to act like a brain. Let it talk like Spock, let it convince me how much smarter it is, let it show off its ability to solve problems I never could. Leave the human condition, the emotional volatility, and with them, the art, to us people.