Wednesday, August 24, 2011

Moral dialogue takes place between an 'I' and a 'thou'

To: Vasco K.
From: Geoffrey Klempner
Subject: Moral dialogue takes place between an 'I' and a 'thou'
Date: 16 July 2004 09:08

Dear Vasco,

Thank you for your email of 5 July, with your fourth essay for the Moral Philosophy program, in response to the question, ''Moral dialogue takes place... between an 'I' and a 'thou'.' Discuss.'

In your email you said that you found this essay 'tough'. This is a fine piece of work, which has helped move my thinking forward on this very difficult issue. I would like to include this essay (with minor editorial corrections) in the next issue of Philosophy Pathways, due out this weekend.

I had to look 'usance' up in a dictionary as the term is not in common use. In the context, I understand 'usance' to mean 'customary agreement'. The term is also used in a financial sense, concerning the time allowed for making payments (I think).

So when you say, 'mere willingness to engage in dialogue can bring tacit agreement to accept usance', I take this to mean, 'willingness to engage in dialogue implies a customary or previously agreed accommodation between the parties involved'.

The significance of this is that the description of moral dialogue as a negotiation 'from scratch', where everything has to be laid on the table, is an ideal, or an abstraction which has to be fitted to the way we actually live. In many cases, we know in advance what the other party or parties will say, what their interests are; we have the outcomes of our previous dialogues at our disposal. New dialogue arises when our customary agreements and accommodations require an adjustment of some sort to account for changing circumstances.

I agree that we enter dialogue 'prepared that at the end one may be proven totally wrong'. However, this does not imply that we have somehow to ditch all our values, beliefs, rules and expectations before we can start. On the contrary, if you take away a person's values, beliefs, etc. there is no longer any *person* there, no individual. Arguably, only an artificial intelligence could reason from no particular standpoint. What is true, and this is a point made by Sartre, is that our values are not *given* facts. My values are merely the way I see things now, at this moment in time; there is always the possibility that I will come to see things in a radically different way. My values are not the *cause* of my present actions - to suppose that they are is what Sartre terms 'bad faith'.

The picture which we have described of moral dialogue is a representation of what ultimately makes a moral decision 'right' or 'wrong'. It represents the closest approximation to a 'theory of moral truth'. The truth is what we would reach in an ideal dialogue. However, as you remark, in the real world circumstances are often far from ideal. In such cases, the decision we make is guided by our conception of the 'truth' towards which we *aim* our moral judgement - made often in difficult circumstances, with insufficient time to deliberate, where the parties involved are not able to state their case.

'We must not only try to see through the other's eyes, but we have to defend his point of view (as we see it) with the same vigour as we defend ours.' In the ideal case, the other does not need our help to defend his views. In the real world, when I make a moral decision which affects, say, a child or someone who does not have the resources to argue their case forcefully then, yes, I must in my private deliberations argue the other's case on his or her behalf.

At this point, you raise a very considerable difficulty: 'Imagine that we are aiming at something new, untried, but something we strongly believe in.... How ruthlessly [will we] defend our view... Only as far as we are willing to accept full responsibility and full blame... should we be proven wrong.'

Now, there is a problem with this. Here is a case where your claim would hold. A government wants to build a new type of nuclear power station. One of the scientists involved becomes convinced from his research that there is a flaw in the design. He has the moral responsibility to fight to convince the others. This is not an opportunity for 'moral dialogue', for reaching a compromise which best respects the values of all. Why not?

Clearly, because the *truth* with which we are concerned here is not a moral truth but concerns the factual circumstances. Is the design flawed or not? Will the power station become another Chernobyl? The duty of the dissenting scientist is to take on the responsibility of convincing the others at all costs, without compromise.

We should always be moral; but not every dialogue is primarily concerned with morals.

On the other hand, suppose that the safety of the power station is not the issue, but instead the argument concerns the competing needs of the local community, who will be adversely affected by the project, and the 'national interest'. Here there is no single individual and no group who can 'take on all the responsibility' of the decision.

Moral dialogue is different from 'mere horse trading'. This was the point of the question. An 'I' and a 'thou' do not horse trade. However, as we have seen above, in the real world there will often arise circumstances where one of the parties is more sophisticated, better at arguing his or her case. In such a case, the moral obligation is to use those skills to represent to oneself the interests of the other.

The balance between 'self-assertion' and 'self-sacrifice' is another topic that has come up in relation to moral dialogue.

The point here is not that each of us has to 'balance self-assertion and self-sacrifice' in the same way but, on the contrary, that we are *not* all the same in this regard. (Most of us are somewhere between the extremes of a Picasso and a Mother Teresa.) There is no single rule for how much self-sacrifice is demanded, or how much self-assertion permitted. The circumstances of each human individual are unique. There is no rule here and moral dialogue does not supply the answer, other than to be 'true to oneself'.

All the best,

Geoffrey