Sunday, November 26, 2006

Where the magic happens

John Searle (from http://sss.stanford.edu/others/johnsearle/):

'Could a machine think?' My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.


Searle is always good for a laugh. He's probably most famous for his Chinese room thought experiment. I'll leave it to you to follow that link if you're not familiar with it; my point here is that Searle's arguments usually follow a simple rubric:

  1. Assume that there is something special about the brain (preferably something that computers don't have, in preparation for step 3)
  2. Assume that you need that special something to have intelligence
  3. Point out that computers don't have the special thing from step 1
  4. Season with a false analogy, blend for five minutes
  5. Profit

See, it's easy! Now you can be a philosopher, too!

Seriously, though. In this case, since this quote is from The Singularity Summit this past May at Stanford, he tried to throw us off the trail by starting with an admission that machines can think. His argument here appears to be that since machines are physical, they can think, whereas programs are unphysical, so they can't. An easy argument to make, as long as you accept the proposition that thinking is a physical act.

What is quite interesting about this particular Searle quote is this bit at the end:
...where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.

How very clever, to use the word "dualist" for exactly the opposite of the sort of people it was originally coined to describe!

Here's a hint, John: there's no "deep and abiding dualism" at work here. We speak of strong AI, and we don't care one bit whether "mind" is involved. I wouldn't deign to try to define "mind" even for a moment; it's like all the other garbage you guys love to debate: irrelevant, redefinable, and unobservable. The reason I believe in the so-called "miracle" is that to me, strong AI does not involve creating a "mind"; it involves creating a computer program that has a certain type of behaviour. If it's a thousand Chinese men in a room passing papers back and forth, that's more than good enough for me. Good luck building it: if you'd like to debate how feasible that sort of scenario is, I'll happily play that game. But I'm sorry - just as the existence of God will never be settled with a slick thought experiment, neither will the possibility of strong AI.

As for the milk and sugar example, I'd agree that there's no way to make milk and sugar by running a simulation: the desired output in that case is a physical object. Searle seems to assume that the desired output of a strong AI would also necessarily be a physical object... why, exactly? I don't know. Apparently it's obvious to him that intentionality is a "biological phenomenon." Why? See step 1 in my "How to argue like Searle" list above...

If a program can pass the Turing test, that settles the question for me. We're not arguing here over whether computers can think, feel, love, or appreciate poetry; we're arguing over whether they can fake it. After all, if you can fake it well enough to fool everyone (including yourself), are you really still faking? How do you know that you're not just "faking" intelligent thought right now?
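To make concrete just how behavioural that criterion is, here's a toy sketch (in Python, purely illustrative; the reply functions and the judge below are hypothetical stand-ins, not anybody's real chatbot or judging protocol) of the imitation game: a judge sees two unlabelled transcripts and guesses which one came from the machine. If the judge can't beat a coin flip, the program "passes."

```python
# A minimal sketch of the imitation game's behavioural criterion.
# All names here (the reply functions, the judge) are hypothetical
# stand-ins, not anyone's real chatbot or judging protocol.
import random

def human_reply(prompt: str) -> str:
    # Stand-in for a human participant typing an answer.
    return "I think " + prompt.lower().rstrip("?") + ", but I'm not sure."

def machine_reply(prompt: str) -> str:
    # Stand-in for the program under test; a real candidate would go here.
    return "I think " + prompt.lower().rstrip("?") + ", but I'm not sure."

def judge(transcript_a, transcript_b) -> str:
    # The judge sees only the two text transcripts, with no labels.
    # If the texts are indistinguishable, the best it can do is guess.
    if transcript_a == transcript_b:
        return random.choice(["A", "B"])
    return "A"  # a real judge would apply whatever heuristics it likes

questions = ["Can machines think?", "Do you appreciate poetry?"]
trials, correct = 1000, 0
for _ in range(trials):
    a, b = human_reply, machine_reply
    machine_is = "B"
    if random.random() < 0.5:  # randomise which side the machine sits on
        a, b = b, a
        machine_is = "A"
    t_a = [a(q) for q in questions]
    t_b = [b(q) for q in questions]
    guess = judge(t_a, t_b)  # the judge guesses which one is the machine
    if guess == machine_is:
        correct += 1

# If the judge can't do better than chance (about 50%), the machine
# "passes": the only thing ever examined is behaviour, i.e. the text.
print(f"judge identified the machine in {correct}/{trials} trials")
```

The stand-ins here are trivial on purpose; the point is only that the test's verdict depends on nothing but the transcripts, never on what the participants are made of.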

So, back to the topic of this post: where does the magic happen? Searle's answer: it's in the physics. My answer: who the f--- ordered magic, anyways? All I wanted was to chat with my computer!