Friday, November 19, 2004

Generalized Turing test

I have a bet with one of my former PhD students regarding a strong version of the Turing test. Let me explain what I mean by "strong" version. Turing originally defined his test of artificial intelligence as follows: a tester communicates in some blind way (such as by typing on a terminal) with a second party; if the tester cannot tell whether the second party is a human or a computer, the computer will have passed the test and therefore exhibits AI. When I first read about the Turing test as a kid, I thought it was pretty superficial. I even wrote some silly programs which would respond to inputs, mimicking conversation. Over short periods of time, with an undiscerning tester, computers can now pass a weak version of the Turing test. However, one can define the strong version as taking place over a long period of time, and with a sophisticated tester. Were I administering the test, I would try to teach the second party something (such as quantum mechanics) and watch carefully to see whether it could learn the subject and eventually contribute something interesting or original. Any machine that could do so would, in my opinion, have to be considered intelligent.
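For flavor, here is a minimal sketch (in Python; the pattern rules are invented for this example) of the kind of keyword-matching program I mean, in the spirit of Weizenbaum's ELIZA. It pattern-matches and reflects, with no understanding at all, yet can sustain a short exchange with an undiscerning tester:

```python
import random
import re

# A tiny, invented rule set in the spirit of ELIZA: match a keyword
# pattern, echo back a canned (or lightly reflected) reply.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bbecause\b", ["Is that the real reason?"]),
    (r"\?\s*$", ["Why do you ask?", "What do you think?"]),
]
DEFAULT_REPLIES = ["Tell me more.", "I see. Go on."]

def respond(utterance: str) -> str:
    """Return a canned response keyed off simple pattern matches."""
    for pattern, replies in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT_REPLIES)

if __name__ == "__main__":
    # Converse until the tester types 'quit'.
    while True:
        line = input("> ")
        if line.strip().lower() == "quit":
            break
        print(respond(line))
```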

Now consider the moment when a machine passes the Turing test. We would replicate this machine many times through mass production and set the resulting AI army to solving the world's problems (and to making even smarter versions of itself). Of course, not having to sleep, these machines would make tremendous progress, leading eventually to a kind of machine intelligence incomprehensible to mere humans. In science fiction this eventuality is often referred to as the "singularity" in technological development - the point at which the rate of progress becomes so rapid that we humans can no longer follow it.

Of course, the catch is getting some machine to the threshold of passing the Turing test. My former student, using Moore's law as a guide (along with the related exponential growth rates in bandwidth and storage capacity), is confident that 50 years will be enough time. Rough calculations suggest we are no more than a few decades from hardware capabilities matching those of the brain. Software is, of course, another matter, and our views differ on how hard that part of the problem will be. (The few academic CS people whom I have gotten to offer opinions on this seem to agree with me, although my sample is hardly representative.)
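For concreteness, here is a back-of-envelope version of the hardware estimate as a Python sketch. The starting capacity, the brain-equivalent figure of ~10^16 ops/s, and the 18-month doubling time are all rough, contestable assumptions of mine, not my student's actual numbers:

```python
import math

# Back-of-envelope Moore's law extrapolation. All figures are
# illustrative assumptions, not my student's actual calculation.
current_ops = 1e10           # ~10 Gflop/s: a high-end desktop CPU circa 2004
brain_ops = 1e16             # one common (disputed) estimate of brain-equivalent ops/s
doubling_time_years = 1.5    # classic Moore's law doubling time

doublings = math.log2(brain_ops / current_ops)   # ~20 doublings needed
years = doublings * doubling_time_years
print(f"{doublings:.0f} doublings, roughly {years:.0f} years")
# -> 20 doublings, roughly 30 years
```

Note that shifting any of these inputs by an order of magnitude moves the horizon by only a few doubling times, which is why such estimates tend to cluster between a couple of decades and fifty years.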

I'd be shocked if we get there within 50 years, although it certainly would be fun :-)

6 comments:

DrPat said...

Perhaps we miss an important point when we concentrate on the observer in the blind: if the machine is capable of "fooling" the observer, it is capable of deceit. Since the first indication of consciousness is surely the ability to project the thoughts of others by modeling them within oneself, the proof of consciousness is not just the ability to warp those perceptions, but the intent to behave in a way that does so.

In other words, to lie.

The designer of the AI machine has the intent. Even when that design passes a strong Turing test, the machine itself still has a long way to go to achieve consciousness.


Steve Hsu said...

You make an interesting point regarding intention and consciousness. But I was searching for an operational test that doesn't hinge on the question of consciousness. In practice, once someone has invented a black box that can learn quantum mechanics and help me design faster CPUs, it is only a matter of mass production to accelerate the rate of technological progress tremendously. The black box may not itself be "conscious" or have "intent", but if it passes the strong Turing test, it can probably help me design ever more capable versions of itself. Eventually I won't be needed...

Anonymous said...

But what is the bet?

Steve Hsu said...

We have bet an undisclosed sum of money on whether the strong version of the Turing test will be passed within 50 years. My former student feels he has left himself a large margin of error, since Moore's law estimates suggest the hardware capability will be there within about 30 years.

Anonymous said...

Ray Kurzweil has been suggesting the Moore's law horizon is much closer, on the order of 15 years. Does your student have the calculation handy? It would be interesting to see it.

What if the machines "wake up" and decide not to talk to us - how would we ever know?

Steve Hsu said...

I think there is an issue of connectivity, not just CPU flops. In the brain the connectivity is very large: each "processor" is connected to of order 10^3 to 10^4 others, if I recall correctly. This architecture is very different from that of current CPUs, so allocating an extra decade to achieve it (or to emulate it using an overwhelming speed advantage) seems reasonable to me.
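To make the connectivity point concrete, here is the rough arithmetic as a Python sketch. The neuron count, fan-out, and firing rate are standard order-of-magnitude figures, all approximate:

```python
# Rough synaptic-throughput estimate. These are textbook
# order-of-magnitude numbers, used only to illustrate why
# connectivity, and not raw flops, may be the hard part.
neurons = 1e11        # ~10^11 neurons in the brain
fan_out = 1e4         # each connected to of order 10^3 to 10^4 others
firing_rate_hz = 1e2  # ~100 Hz characteristic firing rate

synapses = neurons * fan_out                  # ~10^15 connections to wire up
events_per_sec = synapses * firing_rate_hz    # ~10^17 synaptic events/s
print(f"connections: {synapses:.0e}, synaptic events/s: {events_per_sec:.0e}")
```

The ~10^15 connections, rather than the raw event rate, are what today's CPU architectures have no analogue for.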

Regarding your second question, I doubt they will "wake up" except under the guidance of teams of AI researchers :-)
