Tuesday, February 16, 2016

Speculations on AI and the Turing Test



In brief, I take no stand on whether what we call AI is possible in fact—i.e. whether a machine could ever be created that would have consciousness. Believing that it can (rather than merely hypothesizing that it can) is circular and religious, resting as it does on acceptance of the (currently) unprovable hypothesis that human consciousness is already a type of machine consciousness.

The Turing Test is inadequate to prove machine consciousness. Any test based upon the imitation of signs can fail. Just as a man or woman can make you believe they are in love with you merely to get your money, so a computer “consciousness” may well reproduce the signs of human consciousness so convincingly that no human can distinguish the performance from actual consciousness, and yet the computer may still not be conscious.

The computer would know if it were conscious, since knowing one is conscious and being conscious are the same thing. But its assurance to us that it is conscious wouldn’t be the same as being conscious. (And in any case, computer consciousness would probably be a new kind of consciousness, and as such would fall under the problem of definition rather than fact.)

So what? In practical terms, it doesn’t make any difference whether a computer is conscious or whether it only seems to us to be conscious. If it looks like consciousness and acts like consciousness, there’s no more harm in pretending that it is conscious than there is in pretending that your cat loves you (incidentally, your cat doesn’t give a shit about you).

True, if we ever get there, there will be all sorts of moral questions that will have to be answered. And it will force us to rethink what it means for us to be conscious in ways we are not now compelled to. But we are so far from there that I feel no interest in addressing these speculative questions as though they were practical. Those questions will compel humans to adjust their own notion of morality; making the adjustments now, however, would be foolish, since the thing that would inspire the readjustment is still a mere hypothesis. Right now it is the job of fiction to lay the groundwork, not science, not theology, not psychology.

The believers in the Turing Test are making a religious argument. This is their proof that God does not exist, that humans are not special, that life itself is not special, that the brain is an organic computer. (We’ve seen this before.) But since the Turing Test won’t prove consciousness, and since the proof of consciousness is not a scientific proof, i.e. not a matter for science, conclusions based on the test won’t be trustworthy and won’t affect science. Conclusions based on a hypothetical passing of the test one day in the future are meaningless today. (That doesn’t mean speculation is meaningless, or that this isn’t the time for that speculation, which it absolutely is—in fiction, in I, Robot, in Her, in Galatea 2.2, just as 1818 was the right time to speculate about the role of electromagnetism in bringing inert tissue to life, which gave us Frankenstein. That was great as fiction; it did not end up working as science.)
AI may be achievable, and may be achieved some day, perhaps even soon. But “I can’t tell whether it’s a computer behind the curtain or a person” won’t confirm that the day has arrived.