Thursday, August 25, 2011

Can non-human animals or machines be conscious?

To: Joanna C.
From: Geoffrey Klempner
Subject: Can non-human animals or machines be conscious?
Date: 20 July 2004 12:00

Dear Joanna,

Thank you for your email of 6 July, with your second essay for the Possible World Machine, in response to the question, 'Explore the issues surrounding the attribution of consciousness to machines and to non-human animals.'

Although you comment in your email, 'I have omitted from the essay the point of [the] Turing test, as I believe it to be irrelevant', much of your essay can be read as an illuminating commentary on the Turing Test.

Turing's bold hypothesis was that the question whether or not to attribute consciousness to a computer is not a question about what might or might not be going on 'inside'. It is purely a question about how the computer behaves - specifically, how it interacts with us.

You might ask, can't *anything* be conscious? Wittgenstein at one point in the Philosophical Investigations says 'Couldn't I imagine having frightful pains and turning into stone while they lasted? Well, how do I know, if I shut my eyes, whether I have not turned into stone?...' (para 283).

Turing's starting point is that we are not concerned with 'consciousness' as a weird kind of 'stuff' that could, unknown to us, be 'in' anything, including a stone. Here, as Wittgenstein adeptly shows, Turing is on firm philosophical ground. (For more argument, see my essay, 'Truth and Subjective Knowledge' on the Wood Paths website at http://klempner.freeshell.org/articles/shap.html .)

Next question: what kind of 'interacting with us' would suffice for consciousness? 'Suffice' might seem to be a problem here, because of the threat of scepticism. Suppose the makers of the Black Box simply put a pre-recorded script inside the machine, based on a complete set of predictions of everything that we would ever ask it: such a machine would pass the test without genuinely responding to us at all. Let's assume, then, that we know enough about the inner workings of the machine in question to know that its utterances are *caused* (in a way yet to be determined) by our attempts at communication, i.e. that it is not just reading from a script.

Turing's idea was that *all* that the machine needs is the ability to communicate (e.g. through a printout, or a screen). It doesn't need 'arms and legs'. The kind of interacting that suffices for consciousness, according to Turing, is simply being able to communicate, to carry out a conversation indefinitely, to listen, to respond appropriately, to reason and argue.

Now, as you point out, one can 'program' a machine to come out with stock phrases, like the toy action men with their push-button repertoire, 'Stand to attention!', 'Attack, attack!' etc. At a slightly greater level of sophistication, you can have a simple program which mimics conversation. On my computer, I have a program called 'Orville'. Here's a short conversation I have just had with Orville:

Orville: Hello

Me: Hi.

Orville: How do you do... please state your problem.

Me: I am writing an email to Joanna.

Orville: How long have you been writing an email to Joanna?

Me: About half an hour.

Orville: What else?

Me: I'm a bit stuck.

Orville: Did you come to me because you are a bit stuck?

Me: I did.

Orville: What is the real problem here?

Me: I'm not sure I understand the Turing Test.

Orville: Do you enjoy not being sure you understand the Turing Test?

Me: No, not at all.

Orville: You are being a bit negative.

Me: I can't help that.

Orville: What if you could help that?...

- At this point, one begins to suspect that one's own words are merely being parroted back!

In fact, you can 'open up' Orville's memory to see the transformation rules which enable it to mimic a human conversation. Turing's idea is precisely this: a genuine case of an intelligent/conscious computer would not use mimicry rules. A couple more exchanges with Orville would make it plain that Orville is not responding in a way that an intelligent subject would respond. The responses cease to be 'appropriate' and become silly and repetitive.
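To make the idea of 'transformation rules' concrete, here is a minimal sketch, in Python, of the kind of pattern-and-template mechanism an ELIZA-style program like Orville might use. The rule set and names here are hypothetical illustrations, not Orville's actual rules: each rule captures part of the user's input, swaps first- and second-person pronouns, and echoes it back inside a canned template.

```python
import re

# Illustrative mimicry rules: each pattern captures part of the user's
# input, and the template parrots the captured fragment back.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i can't (.*)", re.IGNORECASE), "What if you could {0}?"),
    (re.compile(r"i'm not sure (.*)", re.IGNORECASE),
     "Do you enjoy not being sure {0}?"),
]

DEFAULT = "What else?"  # fallback reply when no rule matches

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Apply the first matching transformation rule to one utterance."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return DEFAULT

if __name__ == "__main__":
    print(respond("I am writing an email to Joanna."))
    # → How long have you been writing an email to Joanna?
```

A handful of rules like these can reproduce the exchange quoted above, which is precisely why a few further exchanges expose the trick: there is no understanding behind the echo, only pattern-matching.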

So far, so good.

However, I don't believe that *any* computer program could achieve this, unless it was integrated with a physical body which the computer regarded as its own, including the capacity to act ('arms and legs'), physical needs, etc. - There is some argument for this in unit 4.

Early on in your essay, you talk about thoughts and feelings as 'not generated from a stimulus or function of the brain'. The point which you seem to be trying to express is the difference between 'thinking' (in the broadest sense, which includes feeling) and 'response to stimulus'. As a student, I was once given the essay question, 'Do flowers feel?' Flowers respond to sunlight. They generate electricity when their petals or leaves are plucked (which can be played back as a 'scream' using suitable equipment). Why is that not proof that flowers feel? Because conscious feeling is not simply response to stimulus.

This takes us to the question of attributing consciousness to non-human animals. Animals don't just respond to stimulus. Their behaviour shows evidence of an inner life, in that they acquire habits, learn from past experience and so on. What animals don't have is what the Turing Test requires, viz the ability to communicate in language.

So here is the second big question for the Turing Test: what is so special about being able to use language? Why can't there be an 'intelligent', 'conscious' animal which just doesn't happen to be interested in communication? What is 'consciousness' anyway? Should we regard the 'pain' or 'suffering' of a non-human animal any differently because it has a different degree(?) or kind(?) of 'consciousness' to that possessed by human beings? What about human beings (infants, the mentally retarded, or advanced Alzheimer's patients) who do not have language? - I don't have definitive answers to these questions!

All in all, a good essay which raises lots of issues. Well done!

All the best,

Geoffrey