Monday, June 20, 2011

'A computer can think and act but it cannot WILL'

To: Natasha G.
From: Geoffrey Klempner
Subject: 'A computer can think and act but it cannot WILL'
Date: 28 February 2003 12:41

Dear Natasha,

Thank you for your e-mail of 22 February, with your 3rd essay for the Philosophy of Mind program, in response to the question: "'A computer can think and act but it cannot WILL.' - Is that a convincing argument against a materialist view of the nature of the self?"

Let me first work on your intuitions by describing the following imaginary scenario of a chess game against a computer.

The computer responded to my opening move, pawn to King 4, with the same move. Over the next few moves, the computer tried to win control of the centre. As the positions equalized, the computer turned its attention to my King's side, where a bad pawn formation made my King vulnerable to a mating attack. In fending off the attack, however, I weakened my Queen's side. Now the computer relentlessly moved its Queen's side pawns forward. In the ensuing melee, one of the computer's pawns broke free and was two squares away from becoming a Queen. When I realized that I couldn't stop the pawn, I resigned.

Just as in your story of Fred, the objectives of the computer changed in response to changing circumstances. We can say that the computer has the ultimate goal of winning. It wouldn't understand deliberately losing a game for some other objective. Fred's ultimate goal is 'living the good life'. It is arguable, in a similar way, that we cannot understand what it would be to give up the goal of 'living the good life' for some other objective. Even acts of heroic self-sacrifice can be understood in terms of the agent's belief in what 'the good life' consists in.

If we are looking for a difference between human beings and computers, we have to look more closely. One very important point that you make is that human beings evaluate their own actions and the actions of others in terms of 'good' and 'bad'. Above, I talked of 'the good life'. A precise definition can be given of what constitutes checkmate. But there is no precise definition of the good life, because there is an essential evaluative element.

I would argue that the notion of 'responsibility' is essentially linked to the possibility of 'making a response', of justifying our actions to another person, when challenged to do so. Human beings are responsible in this sense, whereas the chess computer is not. Once again, however, what is crucial is the evaluative element. Say the computer makes a move that I cannot see the point of. I click a button on the screen which shows me the computer's analysis of the position. Looking at the analysis, I can now see how in four moves' time, against the best possible defence, the computer will win one of my pawns. But this is just like looking at the printout from a calculating machine. The computer is not arguing with me or justifying its actions or explaining why it made the move that it did.

What exactly does that show? Here are some alternatives:

A. "The brain of a human being cannot function the way a present day computer functions."

B. "The brain of a human being cannot function the way any possible computer functions."

C. "The brain of a human being cannot function the way any possible physical setup functions."

To have an argument against physicalism, you need C. Let's agree on A. There are philosophers (Daniel Dennett is one) who believe that there is, or could be, a 'program' for a human brain, far more complex than any present day computer program, which in effect accomplishes the things I talked about above. It is able to reason, it demonstrates a sense of responsibility, it has values.

Opposing this view are philosophers who are impressed by the very different structures of the brain and the computer. The brain works in a holistic, relational way, whereas computers process a series of commands, one after the other. The claim is that the processing of a series of instructions from a program can never be adequate for intelligence. I don't know whether this is true or not, but let's agree anyway.

That leaves B. The human brain is not like a computer. But it is physical all the same. The only way to 'construct' an intelligent subject is biologically, because of the intrinsic nature of the system involved.

How does considering the nature of will help us in deciding whether to go with B or C?

It is difficult to see how introspection as we go through the process of 'willing' an action can help. The materialist will say that, according to the physical story, we cannot be directly aware of the complex physical processes giving rise to our mental actions. It is part of what it is to be an intelligent subject that one has a sense of being able to 'will' to make decisions for oneself as opposed to following a series of instructions. But who is to say that this very sense is not the product of our brains following the biological 'instructions' preordained by nature?

All the best,

Geoffrey