Monday, October 24, 2011

How do you know the author of these words has a mind?

To: Denis C.
From: Geoffrey Klempner
Subject: How do you know the author of these words has a mind?
Date: 1 September 2005 13:51

Dear Denis,

Before I launch into my comments on your second essay for Possible World Machine, you might enjoy the following link:

(Postmodern Essay Generator)

If you try the essay generator a few times, you will notice certain patterns re-appearing. Without too much difficulty, you could probably deduce the transformation rules which were used to generate the essay from a random selection of philosophical terms, grammatical constructions, philosophers' names and names of other well-known figures. (My 'essay' was a baffling but strangely intriguing little piece about Madonna's use of sexual symbolism.)
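The flavour of such transformation rules can be sketched as a tiny template-filling program. The word lists and templates below are my own invention, purely for illustration; the real generator's rules are naturally far more elaborate:

```python
import random

# Invented word lists and sentence templates, for illustration only.
TERMS = ["textuality", "hyperreality", "the signifier", "desire"]
THINKERS = ["Foucault", "Derrida", "Baudrillard", "Lacan"]
FIGURES = ["Madonna", "Tarantino"]
TEMPLATES = [
    "{thinker} suggests that {term} is inseparable from {term2}.",
    "In the works of {figure}, {term} collapses into {term2}.",
    "Thus {thinker}'s account of {term} prefigures {figure}.",
]

def sentence(rng):
    # Pick two distinct terms and slot random names into a template.
    t1, t2 = rng.sample(TERMS, 2)
    return rng.choice(TEMPLATES).format(
        thinker=rng.choice(THINKERS),
        figure=rng.choice(FIGURES),
        term=t1, term2=t2)

def essay(n_sentences=3, seed=None):
    rng = random.Random(seed)
    return " ".join(sentence(rng) for _ in range(n_sentences))

print(essay(seed=42))
```

Grammatical at the surface, random underneath: the same fixed rules, fed different dice rolls, churn out endlessly varied 'essays'.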

When I am writing letters to students, I am conscious of the temptation to repeat what I have said to another student about the same essay topic. I have learned to spot standard 'essay types'. However, I always try to find something different to say about the essay I am commenting on, even when it reads much like a dozen essays I have read before.

In your case, surprisingly, although this has been quite a popular essay topic, you are the first to consider the idea of essay-writing authorware. All the other answers I can recall have launched into the 'problem of other minds' without considering the particular implications of the way the question is worded. (Well done for that.)

Two preliminary points. There is a subtle (or not-so-subtle) kink to the question, and also to your answer. What critics of postmodern philosophy like the physicist Alan Sokal are basically saying is that postmodern writers *do not* have minds, at least, not when they are 'generating' their postmodern writings. They have merely learned how to throw words and phrases around, sticking them together into a semblance of a meaningful text, but all they have succeeded in doing is fooling themselves and their readers.

There is no mention in your essay of the Turing Test, which might be thought to be the starting point for an answer. The Turing Test essentially relies on interaction between the terminal producing the words and a human interrogator. You do mention interaction in the context of what Sartre says about a literary work. However, this is not interaction in the sense that Turing proposed: it is not analogous to a process of formulating a hypothesis, designing an experiment (putting a question to the terminal) and testing the hypothesis against the result of the experiment.

So let us assume, as you have done, that the phrase 'these words' in the essay question refers to a substantial piece of writing, a few thousand rather than a few hundred words. In fact, however long the piece of writing, there is always the slight probability that it could have been produced completely randomly (as, it is said, a monkey hitting keys at random would eventually produce the works of Shakespeare). But for practical purposes we can discount that possibility.

The hypothesized 'authorware' does not simply generate words at random, but strings them together in apparently meaningful sentences - like the postmodern essay generator, but much more skilfully. And it does this reliably.

However, an equally important consideration is that the authorware is not an artificial intelligence. If it were, then by definition we would be dealing with a 'mind' (for the purposes of this question - as you have mainly interpreted it - we can discount the dualist worry that no merely physical entity can have a mind).

- I wonder if you can see where this is going?

The transformation rules used for the postmodern essay generator are relatively easy to spot. It might take a few sheets of A4 to write them all down. By contrast, the program for the philosophical AI runs to hundreds of thousands or possibly millions of pages.

In both cases, however, we are dealing with rules, a program. What is the *essential* difference? Is there one?

On my Macintosh, I have a piece of software, Orville, based on the 'Eliza' program. Here is a 'dialogue' which I have just had with Orville:

'Good afternoon... please state your problem.'
'I'm writing a letter to Denis.'
'How long have you been writing a letter to Denis?'
'About half an hour.'
'What else?'
'I'm finding it difficult.'
'Did you come to me because you are finding it difficult?'
'How can I help?'
'Tell me what to write.'
'Have you asked such questions before?'
'I'm amazingly intuitive.'

The great thing about Orville is that all the transformation rules are written out in a folder. The text is not very long. If I had the time and energy, I could probably do a lot better. But no matter how hard I worked at polishing the script, things would quickly come unstuck with questions like, 'Tell me what to write.'

This illustrates the essential difference between an AI program - supposing that one could be written - and a set of linguistic transformation rules, however complex. The difference is one of level. So long as we are merely dealing with words and their combination and recombination, all that can ever be produced is mimicry. The rules for a genuine AI program would have to generate the very capacity for language. We can't simply take words ready made and invent some clever way of shuffling them around.

If that is the underlying structure that we are looking for, a subject with the capacity for language, how can we tell? My answer would be that it depends on whether we are dealing with a good piece of philosophical writing or not. In bad philosophical writing - and this is essentially Sokal's point - it does seem that you could have substituted any one of half a dozen different phrases and it wouldn't have made any difference, whereas in a good piece of philosophical writing, each sentence, each paragraph seems necessary in the light of what went before. That is something only a mind can do.

All the best,